[PoC] HF exporters #41992
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Currently, all models (except a select few) are tested and pass successfully!
Skipped tests are either:
CI Results / Commit Info
Model CI Report: ❌ 6 new failed tests from this PR 😭
View the CircleCI Test Summary for this PR: https://huggingface.co/spaces/transformers-community/circle-ci-viz?pr=41992&sha=f30fa9
run-slow: dia, efficientloftr, ernie4_5_vl_moe, falcon_mamba, flava, glm46v, glm4v, glm4v_moe, glm_image, glm_moe_dsa
This comment contains models: ["models/dia", "models/efficientloftr", "models/ernie4_5_vl_moe", "models/falcon_mamba", "models/flava", "models/glm46v", "models/glm4v", "models/glm4v_moe", "models/glm_image", "models/glm_moe_dsa"]
f30fa96 to e44c5ae
run-slow: dia, efficientloftr, ernie4_5_vl_moe, falcon_mamba, flava, glm46v, glm4v, glm4v_moe, glm_image, glm_moe_dsa, glm_ocr
This comment contains models: ["models/dia", "models/efficientloftr", "models/ernie4_5_vl_moe", "models/falcon_mamba", "models/flava", "models/glm46v", "models/glm4v", "models/glm4v_moe", "models/glm_image", "models/glm_moe_dsa", "models/glm_ocr"]
[For maintainers] Suggested jobs to run (before merge)
run-slow: dia, efficientloftr, ernie4_5_vl_moe, esm, falcon_mamba, flava, glm46v, glm4v, glm4v_moe, glm_image, glm_moe_dsa
What does this PR do?
Edit: some PRs were opened that take pieces of this one, like #42697 and #42317, so this one is now mostly about HfExporters 🤗
This is an attempt at standardizing native transformers support for export backends (dynamo, onnx, executorch).
Motivation:
Many code paths in modeling files fail under export and need to be guarded (e.g. with `if not torch.compiler.is_exporting()` in this PR). This means that if we were to transition optimum-onnx/optimum-intel to dynamo export, we would have to rewrite or patch entire modules to avoid these errors. This PR suggests adding a native component in Transformers that handles the export process and is fully tested with all models, to catch these modeling problems early on. It also gives users a friendly API to experiment with exporting freshly added models that are not yet supported in optimum-onnx. optimum-onnx will build on top of this API and be the place for seamless, easy end-to-end export, handling all the extra steps such as generating the inputs, dynamic axes, splitting models (encoder-decoder, VLMs), handling inference, etc.
I started with the simplest models (encoders), then decoders (with past-key-values inputs/outputs), and now the integration works with almost all transformers models (including encoder-decoders and VLMs) except a select few.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.