From b6d11fe1d7f1597edca2f882e97fe129d02e1c37 Mon Sep 17 00:00:00 2001
From: root
Date: Thu, 5 Aug 2021 10:04:51 +0000
Subject: [PATCH 01/10] add rfc for paddlepaddle frontend

---
 rfcs/add_paddlepaddle_frontend.md | 104 ++++++++++++++++++++++++++++++
 1 file changed, 104 insertions(+)
 create mode 100644 rfcs/add_paddlepaddle_frontend.md

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
new file mode 100644
index 00000000..f51b639b
--- /dev/null
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -0,0 +1,104 @@
+- Feature Name: add-paddlepaddle-frontend
+- Start Date: 2021-08-08
+- RFC PR: TODO
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+Add a paddlepaddle frontend, enhance TVM's campatibility of deep learning frameworks, which support PaddlePaddle>=2.0
+
+# Motivation
+[motivation]: #motivation
+
+PaddlePaddle, an independent R&D deep learning platform in China, has been officially open-sourced to professional communities since 2016. It has been widely adopted by a wide range of sectors including manufacturing, agriculture, enterprise service, and so on while serving more than 2.3 million developers. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.
+
+Currently, PaddlePaddle has built a prosperous technological ecology, there are more than 500 models developed by official organization or outside developers, including CV/NLP/OCR/Speech, for more details we can refer to the following links,
+
+- [PaddlePaddle/models](https://github.com/PaddlePaddle/models)
+- [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)
+- [PaddleClas](https://github.com/PaddlePaddle/PaddleClas)
+- [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)
+- [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)
+- [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)
+- [DeepSpeech](https://github.com/PaddlePaddle/DeepSpeech)
+
+After upgrading to 2.0, PaddlePaddle supported imperative programming similar with PyTorch, but a mechanism of `Dynamic to Static` is provided, which can export PaddlePaddle model as graph representation and more friendly for deployment, the following example code shows how to export a PaddlePaddle model,
+
+```
+import paddle
+import paddlehub as hub
+model = hub.Module(name="resnet50_vd_imagenet_ssld")
+input_spec = paddle.static.InputSpec(
+    [1, 3, 224, 224], "float32", "image")
+paddle.jit.save(model, "model/infer", input_spec=[input_spec])
+```
+
+PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We have noticed there are lots of developers convmodel to ONNX format for TVM's supporting, but only part of models can be converted due to the lack of ONNX operators.
+Based on this background, we proposed this RFC addle frontend for TVM, improve usability and extend more models support for PaddlePaddle's users.
+
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+If you dive in the pull request code, there's 2 concepts imported from PaddlePaddle you may want to know,
+- `paddle.jit.load`: Recommended API to load exported inference model, the type of return result is `TranslatedLayer`, stores `Program`(similar with computation graph) and parameters;
+- `paddle.static.load_inference_model`: API to compatible with old version PaddlePaddle's model, the type of return result is `Program`, and all the parameters save in `Scope`, for the default situation, we can extract the parameters from the `paddle.fluid.global_scope()`.
+
+So, this RFC also will bring a new API for TVM to support PaddlePaddle model,
+```
+relay.frontend.from_paddle(program_or_layer, shape_dict=None, scope=None)
+```
+- `program_or_layer`: the return result of `paddle.static.load_inference_model` or `paddle.jit.load`
+- `shape_dict`: optional parameter, input shapes of the model
+- `scope`: optional parameter, only available if `model` is loaded by `paddle.static.load_inference_model`
+
+The following example code shows how to import a PaddlePaddle model,
+```
+import paddle
+model = paddle.jit.load('model/infer')
+
+shape_dict = {'image': [1, 3, 224, 224]}
+mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)
+```
+
+Error may happend if there are some operators is not supported by this frontend, and the details will print out.
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation
+
+Since this RFC is to add a new frontend, PaddlePaddle model will be converted to TVM Relay IR, so all the other features will not be effected.
+
+For this proposed RFC, the whole process of PaddlePaddle frontend importing can be divided into 2 steps:
+- 1. Reading PaddlePaddle Model: The frontend supports PaddlePaddle's inference model format which is exported as graph based model by PaddlePaddle's `Dynamic to Static` mechanism, The model contains 2 files that store the model structure and parameters separately, We use `paddle.jit.load` to load the model files(For the compatibility of previous version of PaddlePaddle, `paddle.static.load_inference_model` also supported);
+- 2. Operator Conversion: After the exported inference model is loaded, we will extract its parameters and convert operators one by one. Since all the operators can be iterated out by toposort, there's no need to worry about the order of converting operators.
+
+# Drawbacks
+[drawbacks]: #drawbacks
+
+This may bring more time cost of unit test running.
+
+# Rationale and alternatives
+[rationale-and-alternatives]: #rationale-and-alternatives
+
+The frontend of PaddlePaddle is similar with ONNX or TensorFlow. We support model loaded by `paddle.jit.load` and `paddle.static.load_inference_model`. Also we haved considered only support `paddle.jit.load` since this API is recommended after PaddlePaddle 2.0, but there are lots of users still use `paddle.static.load_inference_model`.
+Currently, we have to convert PaddlePaddle model to ONNX format to make it work with TVM, but only part of models are supported due to the lack of ONNX operators and the operator difference. With a new PaddlePaddle frontend, we can support more operators and provide a better experience for TVM and PaddlePaddle's users.
+
+# Prior art
+[prior-art]: #prior-art
+
+It's the first time we add a PaddlePaddle frontend to a ML compilers.
+
+# Unresolved questions
+[unresolved-questions]: #unresolved-questions
+
+We will add new unit test which will rely on PaddlePaddle framework, also the test will bring more cost time, if there's any problem, please let me know.
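[Editor's aside, not part of the patch above: the two-step flow just described — convert operators in topological order, and report the details of any unsupported operator — can be sketched in plain Python. Every name below (`convert_program`, the toy `(output, op_type, inputs)` records, the `convert_map` table) is invented for illustration; this is not the actual TVM frontend code.]

```python
# Toy sketch of the RFC's conversion loop -- NOT the actual TVM frontend code.
# Operators are visited in topological order (an input must be produced before
# any operator that consumes it), and every unsupported op type is collected
# so the details can be reported in one pass, as the RFC describes.

def convert_program(ops, convert_map):
    """ops: list of (output_name, op_type, input_names) records, in any order."""
    produced, pending = set(), list(ops)
    converted, unsupported = [], set()
    while pending:
        progressed = False
        for op in list(pending):
            out, op_type, inputs = op
            if all(i in produced for i in inputs):  # all inputs are ready
                pending.remove(op)
                produced.add(out)
                progressed = True
                if op_type in convert_map:
                    converted.append(convert_map[op_type](out))
                else:
                    unsupported.add(op_type)
        if not progressed:
            raise ValueError("graph contains a cycle")
    if unsupported:
        # mirror the RFC: print/report the details of unsupported operators
        raise NotImplementedError(
            "unsupported operators: %s" % sorted(unsupported))
    return converted

# Ops are listed deliberately out of order; topological visiting fixes that.
ops = [("c", "relu", ["b"]), ("b", "conv2d", ["a"]), ("a", "feed", [])]
table = {
    "feed": lambda n: "var(%s)" % n,
    "conv2d": lambda n: "conv(%s)" % n,
    "relu": lambda n: "relu(%s)" % n,
}
print(convert_program(ops, table))  # ['var(a)', 'conv(b)', 'relu(c)']
```

Because each operator is emitted only once all of its inputs exist, the declaration order in the model file never matters — which is exactly why the RFC says there is "no need to worry about the order of converting operators".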
+
+# Future possibilities
+[future-possibilities]: #future-possibilities
+
+For this RFC, we have make a established plan,
+
+- About 200 operators will be supported in this quarter, such as deformable_conv/multiclass_nms
+- Control flow operators will be supported in this year, mainly about while_loop/if/
+- Quantized model will be supported in this year, include quantized mo`PaddleDetection`/`PaddleClas`/`PaddleSeg`

From b342cba9a574e0d3d1e18f8316c97c338ad42fca Mon Sep 17 00:00:00 2001
From: root
Date: Thu, 5 Aug 2021 10:06:22 +0000
Subject: [PATCH 02/10] add rfc for paddlepaddle frontend

---
 rfcs/add_paddlepaddle_frontend.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index f51b639b..485f2dda 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -101,4 +101,4 @@ For this RFC, we have make a established plan,
 
 - About 200 operators will be supported in this quarter, such as deformable_conv/multiclass_nms
 - Control flow operators will be supported in this year, mainly about while_loop/if/
-- Quantized model will be supported in this year, include quantized mo`PaddleDetection`/`PaddleClas`/`PaddleSeg`
+- Quantized model will be supported in this year, include quantized model from `PaddleDetection`/`PaddleClas`/`PaddleSeg`

From f11c16f148259e10fceef671dd47e0df61d2d2d4 Mon Sep 17 00:00:00 2001
From: Jason <928090362@qq.com>
Date: Thu, 5 Aug 2021 19:01:09 +0800
Subject: [PATCH 03/10] Update add_paddlepaddle_frontend.md

---
 rfcs/add_paddlepaddle_frontend.md | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index 485f2dda..ec9c092a 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -6,14 +6,14 @@
 # Summary
 [summary]: #summary
 
-Add a paddlepaddle frontend, enhance TVM's campatibility of deep learning frameworks, which support PaddlePaddle>=2.0
+Add a paddlepaddle frontend, enhance TVM's compatibility of deep learning frameworks, which support PaddlePaddle>=2.0
 
 # Motivation
 [motivation]: #motivation
 
 PaddlePaddle, an independent R&D deep learning platform in China, has been officially open-sourced to professional communities since 2016. It has been widely adopted by a wide range of sectors including manufacturing, agriculture, enterprise service, and so on while serving more than 2.3 million developers. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.
 
-Currently, PaddlePaddle has built a prosperous technological ecology, there are more than 500 models developed by official organization or outside developers, including CV/NLP/OCR/Speech, for more details we can refer to the following links,
+Currently, PaddlePaddle has built a prosperous technological ecology, there are more than 500 models developed by official organization or outside developers, including CV/NLP/OCR/Speech, for more details we can refer to the following links,
 
 - [PaddlePaddle/models](https://github.com/PaddlePaddle/models)
 - [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)
@@ -34,8 +34,8 @@ input_spec = paddle.static.InputSpec(
 paddle.jit.save(model, "model/infer", input_spec=[input_spec])
 ```
 
-PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We have noticed there are lots of developers convmodel to ONNX format for TVM's supporting, but only part of models can be converted due to the lack of ONNX operators.
-Based on this background, we proposed this RFC addle frontend for TVM, improve usability and extend more models support for PaddlePaddle's users.
+PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We have noticed there are lots of developers convmodel to ONNX format for TVM's supporting, but only part of models can be converted due to the lack of ONNX operators.
+Based on this background, we proposed this RFC addle frontend for TVM, improve usability and extend more models support for PaddlePaddle's users.
 
 
 # Guide-level explanation
@@ -62,16 +62,16 @@ shape_dict = {'image': [1, 3, 224, 224]}
 mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)
 ```
 
-Error may happend if there are some operators is not supported by this frontend, and the details will print out.
+Error may happened if there are some operators is not supported by this frontend, and the details will print out.
 
 # Reference-level explanation
 [reference-level-explanation]: #reference-level-explanation
 
-Since this RFC is to add a new frontend, PaddlePaddle model will be converted to TVM Relay IR, so all the other features will not be effected.
+Since this RFC is to add a new frontend, PaddlePaddle model will be converted to TVM Relay IR, so all the other features will not be affected.
 
 For this proposed RFC, the whole process of PaddlePaddle frontend importing can be divided into 2 steps:
 - 1. Reading PaddlePaddle Model: The frontend supports PaddlePaddle's inference model format which is exported as graph based model by PaddlePaddle's `Dynamic to Static` mechanism, The model contains 2 files that store the model structure and parameters separately, We use `paddle.jit.load` to load the model files(For the compatibility of previous version of PaddlePaddle, `paddle.static.load_inference_model` also supported);
-- 2. Operator Conversion: After the exported inference model is loaded, we will extract its parameters and convert operators one by one. Since all the operators can be iterated out by toposort, there's no need to worry about the order of converting operators.
+- 2. Operator Conversion: After the exported inference model is loaded, we will extract its parameters and convert operators one by one. Since all the operators can be iterated out by topo sorted, there's no need to worry about the order of converting operators.
 
 # Drawbacks
 [drawbacks]: #drawbacks
@@ -100,5 +100,5 @@ We will add new unit test which will rely on PaddlePaddle framework, also the te
 For this RFC, we have make a established plan,
 
 - About 200 operators will be supported in this quarter, such as deformable_conv/multiclass_nms
-- Control flow operators will be supported in this year, mainly about while_loop/if/
+- Control flow operators will be supported in this year, mainly about while_loop/if/
 - Quantized model will be supported in this year, include quantized model from `PaddleDetection`/`PaddleClas`/`PaddleSeg`

From 51877120710c9a5a3687201650fe029dbf1d4176 Mon Sep 17 00:00:00 2001
From: Jason <928090362@qq.com>
Date: Thu, 5 Aug 2021 19:01:28 +0800
Subject: [PATCH 04/10] Update add_paddlepaddle_frontend.md

---
 rfcs/add_paddlepaddle_frontend.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index ec9c092a..9d900352 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -100,5 +100,5 @@ We will add new unit test which will rely on PaddlePaddle framework, also the te
 For this RFC, we have make a established plan,
 
 - About 200 operators will be supported in this quarter, such as deformable_conv/multiclass_nms
-- Control flow operators will be supported in this year, mainly about while_loop/if/
+- Control flow operators will be supported in this year, mainly about while_loop/if
 - Quantized model will be supported in this year, include quantized model from `PaddleDetection`/`PaddleClas`/`PaddleSeg`

From d05bfea1051a1136dbaf18d2b06db66528b4b870 Mon Sep 17 00:00:00 2001
From: Jason <928090362@qq.com>
Date: Thu, 5 Aug 2021 19:02:39 +0800
Subject: [PATCH 05/10] Update add_paddlepaddle_frontend.md

---
 rfcs/add_paddlepaddle_frontend.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index 9d900352..555f52bb 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -41,7 +41,7 @@ Based on this background, we proposed this RFC addle frontend for TVM, improve u
 # Guide-level explanation
 [guide-level-explanation]: #guide-level-explanation
 
-If you dive in the pull request code, there's 2 concepts imported from PaddlePaddle you may want to know,
+If you dive into the pull request code, there's 2 concepts imported from PaddlePaddle you may want to know,
 - `paddle.jit.load`: Recommended API to load exported inference model, the type of return result is `TranslatedLayer`, stores `Program`(similar with computation graph) and parameters;
 - `paddle.static.load_inference_model`: API to compatible with old version PaddlePaddle's model, the type of return result is `Program`, and all the parameters save in `Scope`, for the default situation, we can extract the parameters from the `paddle.fluid.global_scope()`.

From aac2788d957cd469a51b5e1f060e69e4cb38938a Mon Sep 17 00:00:00 2001
From: Jason <928090362@qq.com>
Date: Thu, 5 Aug 2021 19:03:16 +0800
Subject: [PATCH 06/10] Update add_paddlepaddle_frontend.md

---
 rfcs/add_paddlepaddle_frontend.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index 555f52bb..c3f44a5a 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -97,7 +97,7 @@ We will add new unit test which will rely on PaddlePaddle framework, also the te
 # Future possibilities
 [future-possibilities]: #future-possibilities
 
-For this RFC, we have make a established plan,
+For this RFC, we have made a established plan,
 
 - About 200 operators will be supported in this quarter, such as deformable_conv/multiclass_nms
 - Control flow operators will be supported in this year, mainly about while_loop/if

From e14bc8085fa7073f3d4a7dbd9ad12571429f642f Mon Sep 17 00:00:00 2001
From: Jason <928090362@qq.com>
Date: Thu, 5 Aug 2021 19:14:52 +0800
Subject: [PATCH 07/10] Update add_paddlepaddle_frontend.md

---
 rfcs/add_paddlepaddle_frontend.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index c3f44a5a..ae8154c3 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -1,6 +1,6 @@
 - Feature Name: add-paddlepaddle-frontend
-- Start Date: 2021-08-08
-- RFC PR: TODO
+- Start Date: 2021-08-05
+- RFC PR: https://github.com/apache/tvm-rfcs/pull/19
 - GitHub Issue: TODO
 
 # Summary

From 4b4d66dd36bc37c8160862e73d0702d4efddc5a3 Mon Sep 17 00:00:00 2001
From: will-jl944
Date: Tue, 10 Aug 2021 16:05:22 +0800
Subject: [PATCH 08/10] refinement

---
 rfcs/add_paddlepaddle_frontend.md | 52 +++++++++++++++----------------
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index ae8154c3..29d72f36 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -6,14 +6,14 @@
 # Summary
 [summary]: #summary
 
-Add a paddlepaddle frontend, enhance TVM's compatibility of deep learning frameworks, which support PaddlePaddle>=2.0
+Add a PaddlePaddle frontend, enhancing TVM's compatibility with deep learning frameworks; the new frontend supports PaddlePaddle>=2.0
 
 # Motivation
 [motivation]: #motivation
 
-PaddlePaddle, an independent R&D deep learning platform in China, has been officially open-sourced to professional communities since 2016. It has been widely adopted by a wide range of sectors including manufacturing, agriculture, enterprise service, and so on while serving more than 2.3 million developers. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.
+PaddlePaddle, an independent R&D deep learning platform in China, has been officially open-sourced to professional communities since 2016. It has been widely adopted by a wide range of sectors including manufacturing, agriculture, enterprise service, and so on while serving more than 2.3 million developers. With such advantages, PaddlePaddle has helped an increasing number of partners commercialize AI.
 
-Currently, PaddlePaddle has built a prosperous technological ecology, there are more than 500 models developed by official organization or outside developers, including CV/NLP/OCR/Speech, for more details we can refer to the following links,
+Currently, PaddlePaddle has built a prosperous technological ecology, there are more than 500 models developed by official organization or outside developers, covering CV/NLP/OCR/Speech, refer to the following links for more details,
 
 - [PaddlePaddle/models](https://github.com/PaddlePaddle/models)
 - [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)
@@ -23,7 +23,7 @@ Currently, PaddlePaddle has built a prosperous technological ecology, there are
 - [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)
 - [DeepSpeech](https://github.com/PaddlePaddle/DeepSpeech)
 
-After upgrading to 2.0, PaddlePaddle supported imperative programming similar with PyTorch, but a mechanism of `Dynamic to Static` is provided, which can export PaddlePaddle model as graph representation and more friendly for deployment, the following example code shows how to export a PaddlePaddle model,
+As of version 2.0, PaddlePaddle supports imperative programming like PyTorch. Furthermore, a mechanism of `Dynamic to Static` is provided to export PaddlePaddle a model in graph representation, which is more friendly for deployment. The following example code shows how to export a PaddlePaddle model,
 
 ```
 import paddle
@@ -34,24 +34,24 @@ input_spec = paddle.static.InputSpec(
 paddle.jit.save(model, "model/infer", input_spec=[input_spec])
 ```
 
-PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We have noticed there are lots of developers convmodel to ONNX format for TVM's supporting, but only part of models can be converted due to the lack of ONNX operators.
-Based on this background, we proposed this RFC addle frontend for TVM, improve usability and extend more models support for PaddlePaddle's users.
+PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We noticed that there are lots of developers converting models to ONNX format for the compatibility with TVM, but only a limited number of models are convertible due to lack of ONNX operators.
+Based on this background, we proposed this RFC PaddlePaddle frontend for TVM, improving usability for PaddlePaddle users and enhancing the compatibility between PaddlePaddle and TVM.
 
 
 # Guide-level explanation
 [guide-level-explanation]: #guide-level-explanation
 
-If you dive into the pull request code, there's 2 concepts imported from PaddlePaddle you may want to know,
-- `paddle.jit.load`: Recommended API to load exported inference model, the type of return result is `TranslatedLayer`, stores `Program`(similar with computation graph) and parameters;
-- `paddle.static.load_inference_model`: API to compatible with old version PaddlePaddle's model, the type of return result is `Program`, and all the parameters save in `Scope`, for the default situation, we can extract the parameters from the `paddle.fluid.global_scope()`.
+If you dive into the pull request code, there are 2 concepts imported from PaddlePaddle that you may want to know,
+- `paddle.jit.load`: Recommended API to load an exported inference model, the type of the return value is `TranslatedLayer`, storing `Program`(similar to computation graph) and parameters;
+- `paddle.static.load_inference_model`: API compatible with older version PaddlePaddle models, the type of the return value is `Program`. All the parameters are saved in `Scope` by default, parameters can be extracted from the `paddle.fluid.global_scope()`.
 
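[Editor's aside, not part of the patch above: the frontend has to normalize the two load paths just listed into one (graph, parameters) pair. A toy sketch of that dispatch follows; the `Fake*` classes and `graph_and_params` helper are invented stand-ins, NOT the real PaddlePaddle types or API.]

```python
# Editor's toy sketch with stand-in classes -- NOT the real PaddlePaddle API.
# paddle.jit.load returns a TranslatedLayer-like object that carries its own
# parameters, while paddle.static.load_inference_model returns a Program whose
# parameters live in a scope; the frontend must handle both cases uniformly.

class FakeTranslatedLayer:
    """Stand-in for the return value of paddle.jit.load."""
    def __init__(self, program, params):
        self.program = program
        self._params = params
    def parameters(self):
        return dict(self._params)

class FakeScope:
    """Stand-in for the scope that stores a Program's parameters."""
    def __init__(self, variables):
        self._vars = dict(variables)
    def find_var(self, name):
        return self._vars.get(name)

def graph_and_params(model, scope=None, param_names=()):
    """Normalize either load path into a (graph, params) pair."""
    if isinstance(model, FakeTranslatedLayer):
        # jit.load path: parameters travel with the layer itself
        return model.program, model.parameters()
    if scope is None:  # Program-style models need an explicit scope
        raise ValueError("scope is required when model is a Program")
    return model, {n: scope.find_var(n) for n in param_names}

layer = FakeTranslatedLayer("graph_a", {"w": 1.0})
print(graph_and_params(layer))  # ('graph_a', {'w': 1.0})

scope = FakeScope({"w": 2.0})
print(graph_and_params("graph_b", scope=scope, param_names=["w"]))
# ('graph_b', {'w': 2.0})
```

This mirrors why `relay.frontend.from_paddle` takes an optional `scope` argument: it is only meaningful for the `paddle.static.load_inference_model` path.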
-So, this RFC also will bring a new API for TVM to support PaddlePaddle model,
+Therefore, this RFC will also add a new API to TVM to support PaddlePaddle models,
 ```
 relay.frontend.from_paddle(program_or_layer, shape_dict=None, scope=None)
 ```
-- `program_or_layer`: the return result of `paddle.static.load_inference_model` or `paddle.jit.load`
-- `shape_dict`: optional parameter, input shapes of the model
-- `scope`: optional parameter, only available if `model` is loaded by `paddle.static.load_inference_model`
+- `program_or_layer`: the return value of `paddle.static.load_inference_model` or `paddle.jit.load`
+- `shape_dict`: optional, input shapes of the model
+- `scope`: optional, which is available only if `model` is loaded using `paddle.static.load_inference_model`
 
 The following example code shows how to import a PaddlePaddle model,
 ```
@@ -62,43 +62,43 @@ shape_dict = {'image': [1, 3, 224, 224]}
 mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)
 ```
 
-Error may happened if there are some operators is not supported by this frontend, and the details will print out.
+Errors may happen if there exist some operators in the model that are not supported by this frontend. If so, details will be printed out.
 
 # Reference-level explanation
 [reference-level-explanation]: #reference-level-explanation
 
-Since this RFC is to add a new frontend, PaddlePaddle model will be converted to TVM Relay IR, so all the other features will not be affected.
+Since this RFC only aims to add a new frontend for converting PaddlePaddle models to TVM Relay IR, no other features will be affected.
 
-For this proposed RFC, the whole process of PaddlePaddle frontend importing can be divided into 2 steps:
-- 1. Reading PaddlePaddle Model: The frontend supports PaddlePaddle's inference model format which is exported as graph based model by PaddlePaddle's `Dynamic to Static` mechanism, The model contains 2 files that store the model structure and parameters separately, We use `paddle.jit.load` to load the model files(For the compatibility of previous version of PaddlePaddle, `paddle.static.load_inference_model` also supported);
-- 2. Operator Conversion: After the exported inference model is loaded, we will extract its parameters and convert operators one by one. Since all the operators can be iterated out by topo sorted, there's no need to worry about the order of converting operators.
+In this proposed RFC, the whole process of PaddlePaddle frontend importing can be divided into 2 steps:
+- 1. Reading a PaddlePaddle Model: The frontend supports models in PaddlePaddle's inference model format, which are exported as graph based models by PaddlePaddle's `Dynamic to Static` mechanism. The model contains 2 files storing the model structure and parameters respectively. We use `paddle.jit.load` to load the model files (For the compatibility with versions of PaddlePaddle below 2.0, `paddle.static.load_inference_model` is also supported);
+- 2. Operator Conversion: After the exported inference model is loaded, we will extract its parameters and convert operators one by one. Since all the operators are traversed according to topological ordering, there's no need to worry about the order of converting the operators.
 
 # Drawbacks
 [drawbacks]: #drawbacks
 
-This may bring more time cost of unit test running.
+Potential increase in time-cost of unit tests.
 
 # Rationale and alternatives
 [rationale-and-alternatives]: #rationale-and-alternatives
 
-The frontend of PaddlePaddle is similar with ONNX or TensorFlow. We support model loaded by `paddle.jit.load` and `paddle.static.load_inference_model`. Also we haved considered only support `paddle.jit.load` since this API is recommended after PaddlePaddle 2.0, but there are lots of users still use `paddle.static.load_inference_model`.
-Currently, we have to convert PaddlePaddle model to ONNX format to make it work with TVM, but only part of models are supported due to the lack of ONNX operators and the operator difference. With a new PaddlePaddle frontend, we can support more operators and provide a better experience for TVM and PaddlePaddle's users.
+The frontend of PaddlePaddle is similar to ONNX and TensorFlow. We support models loaded by `paddle.jit.load` and `paddle.static.load_inference_model`. We considered supporting `paddle.jit.load` only since this API is recommended as of PaddlePaddle 2.0, but there are lots of users still using older versions. Thus, supporting `paddle.static.load_inference_model` is still necessary.
+Currently, we have to convert PaddlePaddle models to ONNX format to make them work with TVM, but only a limited number of models are supported due to the lack of ONNX operators and the operator differences. With a new PaddlePaddle frontend, we can support more operators and provide a better experience for TVM and PaddlePaddle's users.
 
 # Prior art
 [prior-art]: #prior-art
 
-It's the first time we add a PaddlePaddle frontend to a ML compilers.
+It's the first time we have added a PaddlePaddle frontend to an ML compiler.
 
 # Unresolved questions
 [unresolved-questions]: #unresolved-questions
 
-We will add new unit test which will rely on PaddlePaddle framework, also the test will bring more cost time, if there's any problem, please let me know.
+We will add new unit test cases that rely on PaddlePaddle framework, and this may increase time-cost of unit tests. If there are any problems, please let me know.
 
 # Future possibilities
 [future-possibilities]: #future-possibilities
 
-For this RFC, we have made a established plan,
+For this RFC, we have made an established plan,
 
 - About 200 operators will be supported in this quarter, such as deformable_conv/multiclass_nms
-- Control flow operators will be supported in this year, mainly about while_loop/if
-- Quantized model will be supported in this year, include quantized model from `PaddleDetection`/`PaddleClas`/`PaddleSeg`
+- Control flow operators will be supported this year, mainly while_loop/if
+- Quantized models will be supported this year, including quantized models from `PaddleDetection`/`PaddleClas`/`PaddleSeg`

From ab275db8fb3ce4a33b702ebdfec0ab71c193d580 Mon Sep 17 00:00:00 2001
From: will-jl944
Date: Tue, 10 Aug 2021 16:55:53 +0800
Subject: [PATCH 09/10] refinement

---
 rfcs/add_paddlepaddle_frontend.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index 29d72f36..118e58a2 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -23,7 +23,7 @@ Currently, PaddlePaddle has built a prosperous technological ecology, there are
 - [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP)
 - [DeepSpeech](https://github.com/PaddlePaddle/DeepSpeech)
 
-As of version 2.0, PaddlePaddle supports imperative programming like PyTorch. Furthermore, a mechanism of `Dynamic to Static` is provided to export PaddlePaddle a model in graph representation, which is more friendly for deployment. The following example code shows how to export a PaddlePaddle model,
+As of version 2.0, PaddlePaddle supports imperative programming like PyTorch. Furthermore, a mechanism of `Dynamic to Static` is provided to export a PaddlePaddle model to graph representation, which is more friendly for deployment. The following example code shows how to export a PaddlePaddle model,
 
 ```
 import paddle
@@ -35,7 +35,7 @@ paddle.jit.save(model, "model/infer", input_spec=[input_spec])
 ```
 
 PaddlePaddle's deployment is supported by Paddle Inference/Paddle Lite/OpenVINO/Tengine/Adlik now. We noticed that there are lots of developers converting models to ONNX format for the compatibility with TVM, but only a limited number of models are convertible due to lack of ONNX operators.
-Based on this background, we proposed this RFC PaddlePaddle frontend for TVM, improving usability for PaddlePaddle users and enhancing the compatibility between PaddlePaddle and TVM.
+Based on this background, we proposed this RFC to add a PaddlePaddle frontend for TVM, improving usability for PaddlePaddle users and enhancing the compatibility between PaddlePaddle and TVM.
 
 
 # Guide-level explanation
@@ -92,7 +92,7 @@ It's the first time we have added a PaddlePaddle frontend to an ML compiler.
 # Unresolved questions
 [unresolved-questions]: #unresolved-questions
 
-We will add new unit test cases that rely on PaddlePaddle framework, and this may increase time-cost of unit tests. If there are any problems, please let me know.
+We will add new unit test cases that rely on the PaddlePaddle framework, and this may increase the time-cost of unit tests. If there are any problems, please let me know.
 
 # Future possibilities
 [future-possibilities]: #future-possibilities

From e97ea04373e4e3276d89db88e35b628b0a5356d0 Mon Sep 17 00:00:00 2001
From: Jason <928090362@qq.com>
Date: Wed, 11 Aug 2021 20:43:13 +0800
Subject: [PATCH 10/10] add demo code to load model from disk

---
 rfcs/add_paddlepaddle_frontend.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/rfcs/add_paddlepaddle_frontend.md b/rfcs/add_paddlepaddle_frontend.md
index 118e58a2..c7575161 100644
--- a/rfcs/add_paddlepaddle_frontend.md
+++ b/rfcs/add_paddlepaddle_frontend.md
@@ -56,9 +56,19 @@ relay.frontend.from_paddle(program_or_layer, shape_dict=None, scope=None)
 The following example code shows how to import a PaddlePaddle model,
 ```
 import paddle
-model = paddle.jit.load('model/infer')
+from tvm.contrib.download import download_testdata
+from tvm.contrib.tar import untar
+from tvm import relay
 
-shape_dict = {'image': [1, 3, 224, 224]}
+# Download PaddlePaddle ResNet50 model
+model_url = 'https://bj.bcebos.com/x2paddle/models/paddle_resnet50.tar'
+model_path = download_testdata(model_url, "paddle_resnet50.tar", "./")
+untar(model_path, "./")
+
+# Load PaddlePaddle model
+model = paddle.jit.load('paddle_resnet50/model')
+
+shape_dict = {'inputs': [1, 3, 224, 224]}
 mod, params = relay.frontend.from_paddle(model, shape_dict=shape_dict)
 ```