Migrate ExecuTorch's use of pt2e from torch.ao to torchao #10294
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10294
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure as of commit 5f9da89 with merge base 9aaea31.
NEW FAILURE - The following job has failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
```diff
 from executorch.exir.backend.compile_spec_schema import CompileSpec
-from torch.ao.quantization.fake_quantize import (
 from torch.fx import GraphModule, Node
+from torchao.quantization.pt2e import _ObserverOrFakeQuantizeConstructor
```
Correct me if I am wrong, but torchao isn't a mandatory dependency today; now it is?
How do we define mandatory dependencies? Is it whatever the install_requirements script installs?
Seems like we pull torchao in from source (line 60 in 647e1f1), so this submodule is already updated, since the tests are passing here.
Check (1) whether we run tests on ExecuTorch wheels with anything quantization-related, and (2) if we do, whether they pass for this diff.
```diff
     QuantizationConfig,
 )
-from torch.ao.quantization.fake_quantize import (
+from torchao.quantization.pt2e.fake_quantize import (
```
Adding partners for visibility.
```diff
-from torch.ao.quantization.pt2e.graph_utils import find_sequential_partitions
-from torch.ao.quantization.quantizer import QuantizationSpec, Quantizer
+from torchao.quantization.pt2e import find_sequential_partitions
+from torchao.quantization.pt2e.observer import HistogramObserver, MinMaxObserver
```
We can remove `observer` from the import path here.
```diff
+from torchao.quantization.pt2e.fake_quantize import FakeQuantize
+from torchao.quantization.pt2e.observer import MinMaxObserver, PerChannelMinMaxObserver
```
We can remove `observer` and `fake_quantize` from the import paths.
Looks reasonable to me. Let's just use trunk to trigger more CI jobs.
Trunk is already triggered. Thanks!
@metascroy has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
This pull request was exported from Phabricator. Differential Revision: D74694311
Summary: Most code related to PT2E quantization is migrating from torch.ao.quantization to torchao.quantization.pt2e. torchao.quantization.pt2e contains an exact copy of the PT2E code in torch.ao.quantization. The torchao pin in ExecuTorch has already been bumped to pick up these changes.
Pull Request resolved: #10294
Reviewed By: SS-JIA
Differential Revision: D74694311
Pulled By: metascroy
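A migration like this is easy to leave half-finished, so it can help to scan the tree for files that still use the legacy import path. The helper below is a hypothetical sketch (not part of this PR); the `torch.ao.quantization` pattern comes from the diffs above, everything else is illustrative:

```python
import pathlib
import re

# Matches "from torch.ao.quantization..." or "import torch.ao.quantization..."
# at the start of a line. Note this does NOT match "torchao.quantization",
# since that path has no dot between "torch" and "ao".
LEGACY_IMPORT = re.compile(r"^\s*(?:from|import)\s+torch\.ao\.quantization", re.MULTILINE)

def find_legacy_imports(root):
    """Return sorted paths of .py files under `root` still importing torch.ao.quantization."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        if LEGACY_IMPORT.search(path.read_text(encoding="utf-8")):
            hits.append(str(path))
    return sorted(hits)
```

Running this over the quantizer directories before and after the migration would show which files still need updating.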
Closing this. The migration was instead done in pieces.
Most code related to PT2E quantization is migrating from torch.ao.quantization to torchao.quantization.pt2e.
torchao.quantization.pt2e contains an exact copy of PT2E code in torch.ao.quantization.
The torchao pin in ExecuTorch has already been bumped to pick up these changes.
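Since the migration amounts to switching import paths, downstream code that must work across both old and new pins could use a fallback import. This is a hedged sketch, not part of the PR; the commented module paths are taken from this PR's diffs, and whether both exist depends on your torch/torchao versions:

```python
import importlib

def import_with_fallback(new_path, legacy_path, name):
    """Try to import `name` from the new module path, falling back to the legacy one."""
    last_error = None
    for module_path in (new_path, legacy_path):
        try:
            module = importlib.import_module(module_path)
            return getattr(module, name)
        except (ImportError, AttributeError) as exc:
            last_error = exc
    raise ImportError(
        f"could not import {name!r} from {new_path} or {legacy_path}"
    ) from last_error

# Example usage (assumes torchao and/or torch are installed):
# find_sequential_partitions = import_with_fallback(
#     "torchao.quantization.pt2e",                 # new location (this PR)
#     "torch.ao.quantization.pt2e.graph_utils",    # legacy location
#     "find_sequential_partitions",
# )
```

Note that ExecuTorch itself took the simpler route of a hard cutover once the torchao pin guaranteed the new paths; a shim like this only matters for code tracking multiple pins.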