cherry picked commits for rel-1.8.1 #8076

Merged
harshithapv merged 14 commits into rel-1.8.1 from havenka/cherry-picks-1.8.1
Jun 18, 2021

Conversation

@harshithapv
Contributor

The following 15 commits as of 06/16/2021 are cherry-picked in this PR for rel-1.8.1.
Reduce Kernel Optimization component:training-core release:1.8.1
#8067 by iK1D was merged 8 hours ago
• Approved

fix PATH addition in windows release:1.8.1
#8043 by iperov was merged 23 hours ago
• Approved

[js/react_native] Use a mobile ORT instead of a full ORT release:1.8.1
#8042 by hanbitmyths was merged 23 hours ago
• Approved

Fix Memory Leak from DlpackToOrtValue component:training-core release:1.8.1
#8029 by iK1D was merged 6 days ago
• Approved

Make logic in InsertCastTransformer around forcing a node to fp32 more precise. release:1.8.1
#8018 by skottmckay was merged 6 days ago
• Approved

Save module output for backward if needed component:ortmodule release:1.8.1
#8010 by SherlockNoMad was merged 6 days ago
• Approved

Zip Windows GPU Package missing CUDA provider files release:1.8.1
#8002 by RyanUnderhill was merged 7 days ago
• Approved

add missing provider_options.h in packages release:1.8.1
#7995 by jywu-msft was merged 8 days ago
• Approved

Update DirectML EP changes from DmlDev as of 2021-06-07 release:1.8.1
#7987 by sumitsays was merged 5 days ago
• Approved

Update CMakeLists.txt for openvino EP release:1.8.1
#7980 by snnn was merged 9 days ago
• Approved

Override ORTModule named_modules to support extra arg component:ortmodule component:training-frontend release:1.8.1
#7954 by baijumeswani was merged 8 days ago
• Approved

Cache initializers to avoid iterating over them on every forward component:ortmodule component:training-frontend release:1.8 release:1.8.1
#7905 by baijumeswani was merged 14 days ago
• Approved

Add SoftmaxCrossEntropyLossInternal to Support Dynamic ignore_index Input ( conflicts with #7937 and auto grad PR #7513 )
component:ortmodule component:training-core release:1.8.1
#7899 by iK1D was merged 8 days ago
• Approved

Change CMAKE_CUDA_STANDARD to C++17 for Windows GPU build release:1.8.1
#7883 by snnn was merged 15 days ago
• Approved

ATenOp Enhancement component:training-core release:1.8.1 (had conflicts with autograd PR - #7513)
#7725 by iK1D was merged 9 days ago
• Approved

baijumeswani and others added 14 commits June 16, 2021 18:11
* config parser, default argument values

* ut

* win build

* maxpool2d

* fix win build

* fix build

* unfold atenop
…nput (#7899)

* add SoftmaxCrossEntropyLossInternal

* bugfix and ut

* fix ut

* fix ut

* support torch1.8.1

* function body for nll_loss_internal
* consolidate copy binary script for gpu/trt tarball package

* add provider_options.h

* add provider_options.h
* Save module output for backward if needed
…e precise. (#8018)

* Address #7981

Reworked the logic around forcing a node to run on fp32 even if it was supported on fp16.

The github issue had multiple factors. In ORT 1.8 we remove Identity nodes that produce graph outputs as they're not needed. That resulted in a Loop node no longer having output nodes (it produces graph outputs instead), which meant the check in IsSingleInputNodeFloat16Node returned true as there was no longer a downstream Identity node processing fp16 data.

We should only force a node to fp32 in very specific circumstances, and the changes hopefully check for those more precisely.
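
The fix described above can be sketched in a few lines. This is an illustrative sketch only, not ORT's internal API: the `Node` class and `should_force_fp32` helper are assumptions made up for this example. The bug was that a check treated "no downstream nodes" the same as "downstream fp16 Identity node", so removing Identity nodes that produce graph outputs flipped the decision for the Loop node; the stricter check distinguishes the two cases.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical minimal graph node: op type plus its downstream consumers.
    op_type: str
    consumers: list = field(default_factory=list)

def should_force_fp32(node: Node) -> bool:
    """Sketch of the narrowed condition for forcing a node back to fp32."""
    if not node.consumers:
        # The node feeds graph outputs directly (e.g. after ORT 1.8 removed a
        # trailing Identity). The absence of consumers is no longer taken as
        # evidence that the fp16 path is wasted work.
        return False
    # Only force fp32 when every consumer is a pass-through Identity that
    # would otherwise process fp16 data unnecessarily (simplified condition).
    return all(c.op_type == "Identity" for c in node.consumers)
```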
* Merged PR 6093117: Fix test_DynamicQuantizedLinear_max_adjusted_expanded by allowing Identity operator to run on non-float inputs

Motivation:
As part of the OnnxConformance Backend tests, DynamicQuantizedLinear_max_adjusted_expanded is failing.

Root Cause:
- The test model has an `Identity` operator as one of its nodes. The input of this node is of a non-float data type.
- In DML, the `Identity` operator is registered as an operator that requires float input.
- As per `DirectMLSchema.h`, support for non-float input has been added for the `Identity` operator in DML, but the same has not been reflected in `OperatorRegistration.cpp`.

Changes:
- Removed all traces of the requiresFloatFormatsForGraph flag from its definition and usage. This flag was only used for Identity and its related operators.
- Added a null check for the graphOutput nodeArg in GraphDescBuilder.cpp to stop the test from crashing.

Related work items: #33076298

* Merged PR 6103324: Remove usage of non-generic error code (FWP_E_NULL_POINTER)

Motivation:
Addressing Dwayne's comment on the previous PR. [Ref: [6093117](https://dev.azure.com/microsoft/WindowsAI/_git/onnxruntime/pullrequest/6093117?discussionId=44292162&path=%2Fonnxruntime%2Fcore%2Fproviders%2Fdml%2FDmlExecutionProvider%2Fsrc%2FGraphPartitioner.cpp)]

Changes:
Inside the DML EP, we should not use other platform-specific error codes. Instead we should use an appropriate generic error code.

Related work items: #33076298

Co-authored-by: Sumit Agarwal <sumitagarwal@microsoft.com>
* Change full ort to mobile ort

* Update Android example to load mobile ort

* Change the format of test models to ort

* update ios to use mobile ort

* revise README

* use onnxruntime-mobile-c CocoaPods in a npm package
should set PATH, not append a copy of PATH to the tail
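
The PATH fix above can be illustrated with a small sketch. This is an assumed reconstruction of the behavior, not the PR's actual code; `add_to_path` is a hypothetical helper. The point is to prepend the new directory to the existing PATH value exactly once, instead of appending another copy of PATH.

```python
import os

def add_to_path(env: dict, directory: str) -> None:
    """Prepend `directory` to env['PATH'] without duplicating entries."""
    current = env.get("PATH", "")
    parts = current.split(os.pathsep) if current else []
    if directory not in parts:
        # Set PATH to the new directory plus the old value, rather than
        # appending a stale copy of PATH at the tail.
        env["PATH"] = os.pathsep.join([directory] + parts)
```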
* reduce optimization

* bug fix

* add a check

* add ut

* refactor

* add ut cases for keepdims=true
Contributor

@pranavsharma pranavsharma left a comment

I see a few training PRs that wouldn't qualify to be in a patch release. But I'll let the training team make that call.

@snnn
Contributor

snnn commented Jun 18, 2021

We also need this: #8066

@snnn
Contributor

snnn commented Jun 18, 2021

This time, please help us manually check if every GPU package has

  1. libonnxruntime_providers_cuda.so
  2. libonnxruntime_providers_shared.so
  3. onnxruntime_providers_shared.dll
  4. onnxruntime_providers_cuda.dll

@pranavsharma
Contributor

This time, please help us manually check if every GPU package has

  1. libonnxruntime_providers_cuda.so
  2. libonnxruntime_providers_shared.so
  3. onnxruntime_providers_shared.dll
  4. onnxruntime_providers_cuda.dll

@RyanUnderhill can you please help ensure this is the case? thanks!

Contributor

@thiagocrepaldi thiagocrepaldi left a comment

FE looks good

@harshithapv
Contributor Author

We also need this: #8066

We will pick this in the next round.

@harshithapv harshithapv merged commit 8c0c25c into rel-1.8.1 Jun 18, 2021
@harshithapv harshithapv deleted the havenka/cherry-picks-1.8.1 branch June 18, 2021 14:44
@RyanUnderhill
Contributor

This time, please help us manually check if every GPU package has

  1. libonnxruntime_providers_cuda.so
  2. libonnxruntime_providers_shared.so
  3. onnxruntime_providers_shared.dll
  4. onnxruntime_providers_cuda.dll

@RyanUnderhill can you please help ensure this is the case? thanks!

Sure thing, should I run the packaging pipelines on this change or on the 1.8.1 branch?
