Add initial lowering of aten.convolution to tosa.conv2d support #615
tatwaichong wants to merge 1 commit into pytorch:main from tatwaichong:tosa_conv2d
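For context, a rough sketch of the argument and layout mapping that lowering aten.convolution to tosa.conv2d involves. This is illustrative only; the helper names are not the backend's actual serializer code, and it assumes a single-group, non-transposed convolution.

```python
# Illustrative sketch (not the actual Arm backend code): how the aten.convolution
# arguments map onto tosa.conv2d conventions. TOSA expects NHWC activations,
# OHWI weights, a mandatory bias, and a 4-element pad [top, bottom, left, right].
import torch

def map_aten_conv_to_tosa_attrs(stride, padding, dilation, groups):
    assert groups == 1, "depthwise/grouped conv is handled separately"
    pad_h, pad_w = padding
    return {
        "pad": [pad_h, pad_h, pad_w, pad_w],  # [top, bottom, left, right]
        "stride": list(stride),               # [stride_y, stride_x]
        "dilation": list(dilation),           # [dilation_y, dilation_x]
    }

def to_tosa_layouts(x_nchw, weight_oihw):
    x_nhwc = x_nchw.permute(0, 2, 3, 1).contiguous()       # NCHW -> NHWC
    w_ohwi = weight_oihw.permute(0, 2, 3, 1).contiguous()  # OIHW -> OHWI
    return x_nhwc, w_ohwi
```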
Conversation
Thanks Tatwai for the patch. One question: does DepthwiseConv2d also work? I didn't see any test cases where the group number is greater than one.
@digantdesai @cccclai Please also help review this patch. Thanks!
@Jerry-Ge DepthwiseConv2d support will be in another patch.
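For reference on the groups question above, a minimal PyTorch sketch of the difference: a regular conv uses groups=1, while a depthwise conv sets groups equal to the input channel count, which is the case this patch does not yet lower.

```python
import torch

x = torch.randn(1, 8, 16, 16)

# Regular conv: groups == 1, weight shape (out_channels, in_channels, kH, kW).
regular = torch.nn.Conv2d(8, 16, kernel_size=3, padding=1, groups=1)

# Depthwise conv: groups == in_channels, weight shape (out_channels, 1, kH, kW);
# this would map to tosa.depthwise_conv2d rather than tosa.conv2d.
depthwise = torch.nn.Conv2d(8, 8, kernel_size=3, padding=1, groups=8)

print(regular(x).shape, depthwise(x).shape)
```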
cccclai
left a comment
Thanks for the PR. Just commented on some nits. Let's iterate on it.
digantdesai
left a comment
Thanks for the diff. At a high level it looks good. Let's try to address some comments Chen and I left here.
Add non-bias conv support by creating a zero bias tensor.
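A minimal sketch of that idea, assuming tosa.conv2d always consumes a bias operand; the helper name here is illustrative, not the actual backend code.

```python
import torch

def get_conv_bias(weight: torch.Tensor, bias):
    # tosa.conv2d requires a bias operand, so when the aten.convolution node
    # carries bias=None, substitute an all-zero tensor of shape (out_channels,).
    if bias is None:
        out_channels = weight.shape[0]
        return torch.zeros(out_channels, dtype=weight.dtype)
    return bias
```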
Looks good. Let's fix the class counter and a couple of other nit comments.
backends/arm/arm_backend.py (Outdated)

```python
assert isinstance(p_data, torch.Tensor), "Expect Attr to be tensor"
weight_values = p_data.detach().numpy()
parameter_values = p_data.detach().numpy()
```

nit:

```diff
-parameter_values = p_data.detach().numpy()
+buffer_values = p_data.detach().numpy()
```
digantdesai
left a comment
LGTM, thanks @tatwaichong!
https://github.com/pytorch/executorch/pull/615/files#r1349274983 - can we resolve this before merging?
Hi, I responded in the conversation above directly, to check whether I've understood your suggestion.
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Remove
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@digantdesai merged this pull request in 51d6afa.
Picked f0f4db8 and it seems to mitigate the crashes that manifested as follows:

```
0  0x104ac3648  __assert_rtn + 72
1  0x1049ebc5c  ld::Fixup::applyFixup(ld::Atom const*, ld::LayoutLinkedImage const&, unsigned char*) const + 8268
2  0x104a7e7d8  ___ZN2ld16LayoutExecutable27writeContentWithoutLinkEditENSt3__14spanIhLm18446744073709551615EEEy_block_invoke + 332
3  0x19af0a428  _dispatch_client_callout2 + 20
4  0x19af1e850  _dispatch_apply_invoke3 + 336
5  0x19af0a3e8  _dispatch_client_callout + 20
6  0x19af0bc68  _dispatch_once_callout + 32
7  0x19af1eeec  _dispatch_apply_invoke_and_wait + 372
8  0x19af1de9c  _dispatch_apply_with_attr_f + 1212
9  0x19af1e08c  dispatch_apply + 96
10 0x104a7e9e4  void mapReduce<ld::Atom const*, mach_o::Error>(std::__1::span<ld::Atom const*, 18446744073709551615ul>, unsigned long, void (unsigned long, mach_o::Error&, std::__1::span<ld::Atom const*, 18446744073709551615ul>) block_pointer, void (std::__1::span<mach_o::Error, 18446744073709551615ul>) block_pointer) + 336
11 0x104a7e594  ld::LayoutExecutable::writeContentWithoutLinkEdit(std::__1::span<unsigned char, 18446744073709551615ul>, unsigned long long) + 1180
12 0x104a84020  ld::LayoutExecutable::writeToFile(char const*) + 15248
13 0x104a362e8  main + 9424
ld: Assertion failed: (extras.otherInstrOffset != 0 && "Kind::arm64_adrp_ldr missing extra info"), function applyFixup, file Fixup.cpp, line 793.
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```

TODOs:
- [ ] Bisect this to a specific change
- [ ] Check if moving to newer Xcode will work
- [ ] Write a workflow that auto-updates PT + ET pins
This change adds quantized int8 support and some test cases.
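A hedged sketch of what a quantized int8 conv test case might look like; the module, scale, and zero-point values are assumptions for illustration, not the actual test harness.

```python
import torch

class SimpleConv(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        return self.conv(x)

# int8 quantize/dequantize round-trip on the input, roughly the pattern a
# quantized lowering has to handle (scale/zero-point values are arbitrary here).
x = torch.randn(1, 3, 32, 32)
xq = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.qint8)
model = SimpleConv().eval()
y = model(xq.dequantize())
print(y.shape)
```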