[Frontend][PyTorch] torch2.0 ExportedProgram PoC #16531
Conversation
chunit-quic commented on Feb 7, 2024
- Add converter for torch2.0 ExportedProgram
Hi community,

This is a PoC to support ExportedProgram, a new format introduced in PyTorch 2.0. May we ask whether you have any plan to support ExportedProgram? Or perhaps this PR looks fine as a starting point? :D

After a model written with Module or functional APIs is exported to ExportedProgram, it is flattened and represented by core ATen ops. One benefit is that we can focus on writing conversion functions for those core ATen ops, which decreases the effort needed to support the many different Module and functional variants. We are wondering whether you have any plan to support this format as well. If there is no such plan, perhaps this PR can serve as a simple reference for ExportedProgram conversion. Or, if it looks good to you, we can submit a formal PR based on your comments and add it to the converter. It would be really nice to have your thoughts. Thanks!

Best,
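For illustration, here is a minimal sketch of what exporting looks like (assuming torch >= 2.1, where `torch.export` is a public API; in 2.0 the surface was still settling):

```python
import torch
from torch.export import export  # public since torch 2.1

class TinyModel(torch.nn.Module):
    def forward(self, x):
        # Mixes a functional op and a plain tensor op on purpose.
        return torch.nn.functional.relu(x) + 1.0

# Exporting flattens the Module into a graph whose call_function
# nodes all target ATen operators.
ep = export(TinyModel(), (torch.randn(2, 3),))
for node in ep.graph.nodes:
    if node.op == "call_function":
        print(node.target)  # e.g. aten.relu.default, aten.add.Tensor
```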
Thank you for the proposal! With the introduction of the SLM, we are now able to utilize the TVM nn.module for supporting models created with Torch, which has been working well afaik. I am not sure whether the SLM works for the cases you are working on. Here is the llama model implemented via SLM (the SLM llama model). It looks like Torch FX is the underlying representation of the exported graph, so I am wondering if it is possible to use/update the existing FX translator (relax/frontend/torch/fx_translator.py) to support the ExportedProgram.
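For context, a minimal sketch of how the FX translator is driven today (based on the current `from_fx` signature, which may differ across TVM versions):

```python
import torch
from torch import fx
from tvm.relax.frontend.torch import from_fx

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

# Trace the Module into an FX GraphModule, then translate it to Relax.
graph_module = fx.symbolic_trace(TinyModel())
# input_info is a list of (shape, dtype) pairs, one per model input.
mod = from_fx(graph_module, [((2, 3), "float32")])
mod.show()
```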
Hi @yongwww, Thanks for your prompt reply! Here are our thoughts for your reference. :D
Pardon that we hadn't investigated SLM before. We will have some teammates read up on it later.
Please allow me to describe this PR in more detail, and what problems we may encounter if we want to integrate it into the FX translator.
Thank you for reading this long reply. It would be really nice to have your advice! Best,
@chunit-quic thanks for the reply! I'm new to the Torch ExportedProgram; I noticed that the export API is still under development (pytorch/executorch#290). For insights into SLM, please refer to this documentation and explore the code.
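As a rough sketch of what an SLM-style definition looks like (based on the current `tvm.relax.frontend.nn` API; names such as `nn.op.relu` may vary by TVM version):

```python
from tvm.relax.frontend import nn

class MLP(nn.Module):
    def __init__(self):
        self.fc = nn.Linear(4, 4)

    def forward(self, x: nn.Tensor):
        # Assumes nn.op.relu is available in your TVM build.
        return nn.op.relu(self.fc(x))

# export_tvm turns the SLM Module into a Relax IRModule plus parameters.
mod, params = MLP().export_tvm(
    spec={"forward": {"x": nn.spec.Tensor((1, 4), "float32")}}
)
mod.show()
```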
Hi @yongwww, Thank you for providing the SLM materials. :)
Yes, we think there is more benefit to using ExportedProgram, since every functional/nn.Module model can be exported to ExportedProgram, with its functional ops/Modules decomposed into core ATen ops, of which there are only ~200. This means we only need to write translators for those ~200 ops to support conversion of any PyTorch model. However, it is hard to reach the sky in a single bound. If we update the existing FX translator, we need to make sweeping changes, which may break currently supported models. Hopefully, this new translator will have the same capability as the existing translator; if so, we can focus on ExportedProgram rather than the FX translator.
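To make the decomposition point concrete, here is a sketch (assuming torch >= 2.1, where `ExportedProgram.run_decompositions()` is available):

```python
import torch
from torch.export import export  # torch >= 2.1

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x)

ep = export(TinyModel(), (torch.randn(2, 3),))
# run_decompositions lowers composite ATen ops toward the core ATen
# opset, so a translator only has to cover that small, stable set.
core_ep = ep.run_decompositions()
ops = {node.target for node in core_ep.graph.nodes if node.op == "call_function"}
print(sorted(str(op) for op in ops))
```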
Thanks for your introduction; this is an awesome feature for debugging a modularized Relax module. However, our issues are more related to unsupported op conversion in the FX translator, and this feature still requires us to write translators for each new functional/nn.Module. That is the reason we want to leverage ExportedProgram: it decomposes all functional/nn.Module calls into a limited set of ops.
Thanks @chunit-quic for the contribution! I think there are several goals:
I think we all agree that G1 is super important. I would like us to think about how we can unify, so we don't have two versions of the torch importer. Here is one possible way to do so:
@chunit-quic let me know if this can help address the problems you see; would love to work together and get this feature in!
Hi @tqchen, Thank you for joining the discussion! Just one more thing: would you like us to submit an RFC, open a new PR, or keep updating this one? Thank you.
I think updating this PR would be sufficient.
@chunit-quic just want to check in and see where we are on this.
Hi @tqchen, Thank you so much for your attention and effort on this pull request. We truly appreciate your suggestion. Unfortunately, after careful consideration, we've decided to withdraw the pull request at this time. We hope to collaborate again in the future and appreciate your understanding.