NXP backend: Use zero point for quantized padding. #13576
Conversation
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/13576
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures, 1 Pending as of commit 54ef48c with merge base c70aeda.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot label "module: nxp" "release notes: nxp"
    # be included in the computation!
    input_quantization = t_op.tmp_inputs[0].quantization
    pad_value = (
        None
Why None instead of 0 in the non-quantized case?
None is the default value of the builder.create_pad_operator_before() method's constant_value parameter. This way, the actual default padding value (0) is only defined in 1 place.
But it's hard to imagine that the default padding value would ever change, and using 0 here would make the code more understandable. I have no problem using 0 instead of None if you prefer.
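A minimal sketch of the "default defined in one place" pattern discussed above. This is not the actual ExecuTorch implementation; the body and return value are assumptions for illustration. Only the method name `create_pad_operator_before` and its `constant_value` parameter come from the conversation.

```python
# Hypothetical sketch: callers pass None, and the single authoritative
# default padding value (0) lives only inside the builder method.
def create_pad_operator_before(paddings, constant_value=None):
    if constant_value is None:
        constant_value = 0  # default padding value, defined in one place
    return {"paddings": paddings, "constant_value": constant_value}

# Non-quantized caller: None defers to the builder's default.
assert create_pad_operator_before([1, 1])["constant_value"] == 0
# Quantized caller: passes the input's zero point explicitly.
assert create_pad_operator_before([1, 1], constant_value=128)["constant_value"] == 128
```

The trade-off raised by the reviewer is readability: passing `0` at the call site is self-explanatory, while `None` keeps the default in one place at the cost of an extra indirection.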
Force-pushed: 0f68cd2 to 52d6c1b
Force-pushed: 52d6c1b to 54ef48c
### Summary

This PR fixes cases where padding with the value `0` was used for quantized operators. Now, the zero point is used instead.

### Test plan

Unit tests provided.

cc @digantdesai @JakeStevens @robert-kalmar