Conversation
This reverts commit d3e17af.
Cleans the backend (intel_graph.*) code in the following ways:
1. Minimize global usage: since all the IR graphs need to be regenerated on every Infer call, it is bad practice to rely on globals for storing and using them; multiple readers and writers on the same global variable can lead to incorrect usage or contention. This change replaces globals with locals where possible, and also fixes an existing bug caused by incorrect global usage.
2. Remove all unused functions.
3. Remove all unused headers and preprocessor directives.
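The globals-to-locals point above can be sketched as follows. This is a minimal illustration in Python (the actual backend is C++), and all names (`infer`, the op lists) are hypothetical, not the real intel_graph code: when each call builds its graph in a local, concurrent Infer calls can no longer observe or clobber each other's partially built state.

```python
import threading

# Problematic pattern (for contrast): a single module-level graph shared
# by every Infer call, e.g.  ir_graph = []  -- concurrent calls would
# read and write the same object, seeing partial or stale state.

def infer(ops):
    """Hypothetical Infer: builds its IR graph in a local variable."""
    ir_graph = []            # local: each call owns its own graph
    for op in ops:
        ir_graph.append(op)  # no other call can observe this partial state
    return ir_graph

# Two concurrent inferences no longer interfere with each other's graph:
results = {}
t1 = threading.Thread(target=lambda: results.update(a=infer(["Conv", "Relu"])))
t2 = threading.Thread(target=lambda: results.update(b=infer(["MatMul"])))
t1.start(); t2.start(); t1.join(); t2.join()
print(results["a"], results["b"])
```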
Backend logic cleanup. See merge request iot-edge/onnxrt-unified-ep/onnxruntime!9
Signed-off-by: suryasidd <surya.siddharth.pemmaraju@intel.com>
Fix missing plugins.xml for Python bindings. See merge request iot-edge/onnxrt-unified-ep/onnxruntime!10
Added an environment variable to enable debugging. See merge request iot-edge/onnxrt-unified-ep/onnxruntime!11
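As a sketch of the pattern this merge introduces: gating verbose backend output behind an environment variable lets debug logging be switched on without a rebuild. The variable name `ORT_OPENVINO_DEBUG` below is purely illustrative, not necessarily the name the merge request actually uses.

```python
import os

def debug_enabled():
    # Hypothetical variable name; the real one is defined by the backend.
    return os.environ.get("ORT_OPENVINO_DEBUG", "0") == "1"

def debug_log(msg):
    # Only emit diagnostics when debugging is explicitly enabled.
    if debug_enabled():
        print("[openvino-ep]", msg)

os.environ["ORT_OPENVINO_DEBUG"] = "1"  # e.g. exported in the shell
debug_log("subgraph accepted by backend")
```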
Unsqueeze fix. See merge request iot-edge/onnxrt-unified-ep/onnxruntime!12
Overwrites the shape info in the model proto with the shape of the actual input data. This is needed to infer models with dynamic shapes.
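The idea can be sketched as follows. This is an illustrative stand-in, not the actual ONNX Runtime code: symbolic or unknown dimensions in the model's declared input shape are replaced with the concrete dimensions of the incoming tensor, so a dynamically shaped model can be handled as a static one. The helper name `overwrite_shape` is hypothetical.

```python
def overwrite_shape(declared_shape, actual_shape):
    """Replace symbolic/unknown dims in declared_shape with the concrete
    values from actual_shape (the shape of the real input data)."""
    if len(declared_shape) != len(actual_shape):
        raise ValueError("rank mismatch between declared and actual shape")
    resolved = []
    for declared, actual in zip(declared_shape, actual_shape):
        if isinstance(declared, int) and declared > 0:
            resolved.append(declared)  # concrete dim: keep the declared value
        else:
            resolved.append(actual)    # symbolic ("N") or unknown: use the data
    return resolved

# A model declaring a dynamic batch dim ("batch") fed a batch of 8:
print(overwrite_shape(["batch", 3, 224, 224], [8, 3, 224, 224]))
# -> [8, 3, 224, 224]
```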
/azp run Linux CPU CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,Win CPU x86 CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,Win CPU x64 NoContribops CI Pipeline,MacOS NoContribops CI Pipeline,Linux CPU x64 NoContribops CI Pipeline,Windows CPU CI Pipeline,Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 10 pipeline(s).
… CI builds" This reverts commit 89e72ad.
The above pipeline failure seems to be due to the Inference Engine headers not being found, which is likely because the OpenVINO paths are not set. The only commit that modified how the paths are set is the recent change that uses the OPENVINO_VERSION variable to set them. The docker image built with these changes has all the paths set correctly when tested on my local workstation; however, platform differences between my workstation and the CI machine may be causing this to fail.
This is no longer used after the recent ONNX update onnx/onnx@da13be2, so this unset workaround is no longer necessary.
/azp run Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
Git tag info for DLDT, as well as the install directory, is set using this value. This reverts commit 9fa9c20.
/azp run Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Linux CPU CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,Win CPU x86 CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,Win CPU x64 NoContribops CI Pipeline,MacOS NoContribops CI Pipeline,Linux CPU x64 NoContribops CI Pipeline,Windows CPU CI Pipeline
Azure Pipelines successfully started running 9 pipeline(s).
setup.py has a conflict.
/azp run Linux OpenVINO CI Pipeline
Azure Pipelines successfully started running 1 pipeline(s).
/azp run Linux CPU CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,MacOS CI Pipeline,Win CPU x86 CI Pipeline,Windows GPU CI Pipeline,Windows GPU TensorRT CI Pipeline,Win CPU x64 NoContribops CI Pipeline,MacOS NoContribops CI Pipeline,Linux CPU x64 NoContribops CI Pipeline,Windows CPU CI Pipeline
Azure Pipelines successfully started running 9 pipeline(s).
Description:
Version update for the OpenVINO Execution Provider, based on the latest OpenVINO release, 2020.2, which now includes the nGraph stack. Model conversion from ONNX to OpenVINO IR is now done using the nGraph ONNX importer component.
Motivation and Context