Conversation
otherwise hard to find the source code
…4104)
* add yml files
* update pipeline
* fix yaml syntax
* yaml pop BuildCSharp
* update yaml
* do not stage codesign summary
* try mac pipeline
* fix path separator
* copy prebuilds folder
* split esrp yaml for win/mac
* disable mac signing temporarily
* add linux
* fix indent
* add nodetool in linux
* add nodetool in win-ci-2019
* replace linux build by custom docker scripts
* use manylinux as node 12.16 not working on centos6
* try ubuntu
* loosen timeout for test case - multiple runs calls
* [java] Add a CUDA-enabled test.
* Add --build_java to the Windows GPU CI pipeline.
* Remove a stray line from the unit tests that always enabled CUDA for Java.
Fix an onnxruntime init failure caused by a wrong path when loading the native libraries. On OS X 64-bit systems the arch name was detected as x86, which produced an invalid path for the native libraries.

Exception:
java.lang.UnsatisfiedLinkError: no onnxruntime in java.library.path
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867)
    at java.lang.Runtime.loadLibrary0(Runtime.java:870)
    at java.lang.System.loadLibrary(System.java:1122)
    at ai.onnxruntime.OnnxRuntime.load(OnnxRuntime.java:174)
    at ai.onnxruntime.OnnxRuntime.init(OnnxRuntime.java:81)
    at ai.onnxruntime.OrtEnvironment.<clinit>(OrtEnvironment.java:24)
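The failure above comes down to mapping the JVM-reported `os.arch` to the directory name under which the native library is packaged. The sketch below illustrates the idea, assuming a hypothetical `canonicalArch` helper; the mapping values and class name are illustrative, not the actual OnnxRuntime.java implementation.

```java
// Hypothetical sketch of arch detection for native-library loading.
// On 64-bit macOS JVMs, os.arch reports "x86_64" (or "amd64"); if that
// collapses to "x86", the loader builds a path like .../osx-x86/ that
// does not exist, and System.loadLibrary throws UnsatisfiedLinkError.
public final class ArchDetect {
    static String canonicalArch() {
        String arch = System.getProperty("os.arch", "").toLowerCase();
        if (arch.equals("amd64") || arch.equals("x86_64")) {
            return "x64";          // 64-bit x86 must not fall through to "x86"
        } else if (arch.equals("aarch64") || arch.equals("arm64")) {
            return "aarch64";
        }
        return "x86";              // 32-bit fallback
    }

    public static void main(String[] args) {
        // Host-dependent output: the canonical arch name for this JVM.
        System.out.println(canonicalArch());
    }
}
```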
Create CPU and GPU Java publishing pipelines. The final jars are tested on all platforms; however, signing and publishing to Maven are manual steps.
Modify the Gradle build so the artifactId has a _gpu suffix for GPU builds. Pass the USE_CUDA flag on CUDA builds. Adjust the publishing pipelines to extract the POM from the correct path.
Co-authored-by: @Craigacp
* Make symbolic shape inference exit on models that do not use the onnx opset.
* Temporary fix for ConvTranspose with symbolic input dims.

Co-authored-by: Changming Sun <me@sunchangming.com>
1. Enlarge the read buffer size further, so that our code can run even faster. TODO: apply similar changes to the Python and other language bindings.
2. Add coreml_VGG16_ImageNet to the test exclusion set for x86_32. It is not a new model, but previously we didn't run the test against x86_32.
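Item 1 above enlarges the read buffer used when loading model files in the test code. A minimal sketch of the idea: reading a large file in big chunks reduces the number of read calls. The 1 MiB size and the class/method names here are assumptions for illustration, not the values or code used in the ONNX Runtime tests.

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch: read a file through a large buffer so big model
// files are consumed in few, large reads rather than many small ones.
public final class BigBufferRead {
    private static final int BUF_SIZE = 1 << 20; // 1 MiB (assumed size)

    static byte[] readAll(String path) throws IOException {
        try (BufferedInputStream in =
                 new BufferedInputStream(new FileInputStream(path), BUF_SIZE)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[BUF_SIZE];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        // Self-contained demo: write a tiny file, then read it back.
        Path tmp = Files.createTempFile("demo", ".bin");
        Files.write(tmp, new byte[] {1, 2, 3});
        System.out.println(readAll(tmp.toString()).length); // prints 3
    }
}
```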
@snnn, I resolved some conflicts in your PRs. Please take a look: 2ab3a19. The conflict was in onnxruntime/test/onnx/TestCase.cc, on the line: const bool result = model_proto.ParseFromFileDescriptor(fd);
skipModels["tf_nasnet_large"] = "Get preallocated buffer for initializer ConvBnFusion_BN_B_cell_11/beginning_bn/beta:0_331 failed";
skipModels["test_zfnet512"] = "System out of memory";
skipModels["test_bvlc_reference_caffenet"] = "System out of memory";
skipModels["coreml_VGG16_ImageNet"] = "System out of memory";
Please insert the line below at:
https://github.com/microsoft/onnxruntime/blob/master/csharp/test/Microsoft.ML.OnnxRuntime.Tests/InferenceTest.cs#L558
{ "coreml_Imputer-LogisticRegression_sklearn_load_breast_cancer", "Can't determine model file name" },
It is cherry-picked from 3eaec57.
The Windows CPU CI Pipeline still failed with the same error after I cherry-picked those 4 PRs:
https://dev.azure.com/onnxruntime/onnxruntime/_build/results?buildId=168533&view=logs&j=5b022bb4-70a7-5401-8766-a8a7802c7150&t=23e58576-d346-55e7-f729-22ace67bada2&l=98
##[error]Error: The process 'C:\Program Files\dotnet\dotnet.exe' failed with exit code 1
Info: Azure Pipelines hosted agents have been updated to contain .Net Core 3.x (3.0 and 3.1) SDK/Runtime along with 2.1. Unless you have locked down a SDK version for your project(s), 3.x SDK might be picked up which might have breaking behavior as compared to previous versions.
Some commonly encountered changes are:
If you're using Publish command with -o or --Output argument, you will see that the output folder is now being created at root directory rather than Project File's directory. To learn about more such changes and troubleshoot, refer here: https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#troubleshooting
##[error]Dotnet command failed with non-zero exit code on the following projects : C:\agent_work\1\s\csharp\test\Microsoft.ML.OnnxRuntime.Tests\Microsoft.ML.OnnxRuntime.Tests.csproj
Java GPU artifact naming (#4179)
Change group id to com.microsoft.onnxruntime per requirements.
Create Java publishing pipeline (#3944)
Update OnnxRuntime.java for OS X environment. (#3985)
[java] Adds a CUDA test (#3956)
add script to support updating the Node.js binding version (#4164)
[Node.js binding] add linux and mac package (#4157)
[Node.js binding] upgrade node-addon-api to 3.0 (#4148)
fix build: pin Node.js version to 12.16.3 (#4145)
[Nodejs binding] create a new pipeline to generate signed binaries (#4104)
[Node.js binding] add build flag for node.js binding (#3948)
[Node.js binding] fix linux build (#3927)
link to folder instead of READMEs inside folder (#3938)
bump up ORT version to 1.3.1 (#4181)
move back to toolset 14.16 to possibly work around nvcc bug (#4180)
Symbolic shape inference exit on models without onnx opset used (#4090)
Fix Nuphar test failure
Enlarge the read buffer size in C#/Java test code (#4150)
Temporarily disable windows static analysis CI job