
feat: make timesfm2_5 onnx export compatible #45233

Open
pdufour wants to merge 25 commits into huggingface:main from pdufour:paul.dufour/feat/onnx-timesfm

Conversation


@pdufour pdufour commented Apr 4, 2026

What does this PR do?

This fixes issues with the model that made it incompatible with exporting to ONNX.

Specifically the following has been changed:

  1. Data-dependent if condition

if input_len < context_len:

This line raises the following error when you try to export the timesfm module through torch.onnx.export:

Could not guard on data-dependent expression u0 < 16384

Solution: use a branch-free code block which ONNX can export (see the sketch after this list).

  2. Static batch size

for ts in inputs: causes the batch size to be baked into the graph when you export via ONNX.

Solution: allow a tensor input to the _preprocess method / forward method.

  3. _timesfm_moving_average doesn't support the above changes.

Solution: allow a tensor input there as well.
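For illustration, a minimal sketch of the branch-free idea (it mirrors the shape of the change quoted later in the review thread, but it is not the exact patch):

import torch
import torch.nn.functional as F

def preprocess_branch_free(inputs: torch.Tensor, context_len: int):
    """Left-pad/truncate a (batch, time) tensor to context_len without a
    data-dependent `if`, so torch.onnx.export can trace it. Sketch only."""
    # Negative-bound slicing covers both cases: it truncates when the series
    # is longer than context_len and is a no-op when it is shorter.
    x = inputs[:, -context_len:]
    num_front_pad = context_len - x.shape[1]  # 0 when the input was long enough
    x = F.pad(x, (num_front_pad, 0))  # pad on the left (front of the series)
    # Padding indicator: 1 marks padded positions, 0 marks real observations.
    padding = torch.cat(
        [
            torch.ones(x.shape[0], num_front_pad, dtype=x.dtype, device=x.device),
            torch.zeros(x.shape[0], context_len - num_front_pad, dtype=x.dtype, device=x.device),
        ],
        dim=1,
    )
    return x, padding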

Test Plan

On main, create a test file in the repo root

test_onnx_export.py contents:
https://gist.github.com/pdufour/52c7fd722dbe8d60030469d4298779be

Run the export, which will export the timesfm2_5 model to ONNX:

python3 -m venv venv
source venv/bin/activate
pip install -e ".[torch]"
pip install onnx onnxscript
python3 test_onnx_export.py
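(For reference, the export script in the gist is roughly of this shape. This is a sketch only: the model-loading line is a placeholder and the input name "inputs" is an assumption, not the gist's exact contents.)

import torch
from torch.export import Dim

model = load_timesfm_2_5_model()  # placeholder: instantiate timesfm2_5 however the gist does
model.eval()

example_inputs = torch.randn(2, 512)  # example (batch, time) shapes

# dynamo-based exporter; Dim keeps batch/time symbolic so the exported
# graph is not fixed to the example shapes
onnx_program = torch.onnx.export(
    model,
    (example_inputs,),
    dynamo=True,
    dynamic_shapes={"inputs": {0: Dim("batch"), 1: Dim("time")}},
)
onnx_program.save("timesfm2_5.onnx")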

This fails with the following error about the if condition:
<class 'torch.fx.experimental.symbolic_shapes.GuardOnDataDependentSymNode'>: Could not guard on data-dependent expression u0 < 16384 (unhinted: u0 < 16384).  (Size-like symbols: none)

consider using data-dependent friendly APIs such as guard_or_false, guard_or_true and statically_known_true.
Caused by: (src/transformers/models/timesfm2_5/modeling_timesfm2_5.py:705 in _preprocess)

This is the line:

if input_len < context_len:

Now checkout this branch
Run the export again:

python3 test_onnx_export.py

See that it passes!

Run a test:

Create a file test_onnx_infer.py
https://gist.github.com/pdufour/6071e44296c236b27250ef308f8e5273

Run it:

python3 -m venv .venv
source .venv/bin/activate
pip install onnxruntime
python3 test_onnx_infer.py
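(Again a sketch rather than the gist's exact contents; the input name depends on how the model was exported.)

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("timesfm2_5.onnx")
input_name = session.get_inputs()[0].name

# Use a batch size different from the one used at export time to confirm
# the batch dimension really is dynamic.
series = np.random.randn(3, 512).astype(np.float32)
outputs = session.run(None, {input_name: series})
print([o.shape for o in outputs])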

Code Agent Policy

The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.

PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.

This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.

  • I confirm that this is not a pure code agent PR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@xenova @zucchini-nlp @Cyrilvallez @ArthurZucker @vasqu @NielsRogge

@pdufour pdufour force-pushed the paul.dufour/feat/onnx-timesfm branch from 4785c38 to b8c0e9d on April 8, 2026 01:25

xenova commented Apr 9, 2026

Thanks for the PR! cc @IlyasMoutawwakil maybe to include in #41992?

@@ -565,16 +565,16 @@ def _preprocess(
input_ts, input_padding = [], []

for ts in inputs:
Contributor

does your fix also work for multiple batches? 👀 this line/for-loop suggests it will be fixed at the batch_size you exported with.

Member

yeah this will still hardcode the batch size through the loop

Author

hmm yeah, i think I need to change the input to torch.Tensor, do you know another option?


pdufour commented Apr 10, 2026

@xenova @IlyasMoutawwakil updated the PR now so the ONNX export does not fix the batch size; also fixed a couple of other spots with similar issues. Put it all in the PR description. The model looks like this now:

The batch dimension was previously shown as a static value, but now it is dynamic:

[screenshot: the exported ONNX graph now shows a dynamic batch dimension]


pdufour commented Apr 22, 2026

@xenova @IlyasMoutawwakil any chance you have time this week to re-review? Thanks!

Comment on lines +611 to +627
if isinstance(inputs, torch.Tensor) and inputs.ndim == 2:
    x = inputs[:, -context_len:]
    num_front_pad = context_len - x.shape[1]
    x = F.pad(x, (num_front_pad, 0))
    padding = torch.cat(
        [
            torch.ones(x.shape[0], num_front_pad, dtype=x.dtype, device=x.device),
            torch.zeros(
                x.shape[0], context_len + self.horizon_len - num_front_pad, dtype=x.dtype, device=x.device
            ),
        ],
        dim=1,
    )
    result = (x, padding)
else:
    input_ts, input_padding = [], []
    for ts in inputs:
Member

unfortunately we can't have different paths for tensors and lists in the modeling code, what i suggest is to do something like:

if isinstance(obj, list):
    # turn into tensors
# do tensor processing
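Fleshed out, that pattern might look something like this sketch (not the merged code; to_batched_tensor is a hypothetical helper name):

import torch
import torch.nn.functional as F

def to_batched_tensor(inputs):
    # Collapse the list/tensor split into a single path: a list of 1-D
    # series is left-padded and stacked; a tensor passes through unchanged.
    if isinstance(inputs, list):
        tensors = [torch.as_tensor(t, dtype=torch.float32) for t in inputs]
        max_len = max(t.shape[-1] for t in tensors)
        inputs = torch.stack([F.pad(t, (max_len - t.shape[-1], 0)) for t in tensors])
    return inputs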

Author

@IlyasMoutawwakil updated now! Let me know if that is what you were thinking, I also added an optional past_observed_mask arg since IIRC you can't have variable-length sequences in a single batched tensor.

Let me know if I am wrong about that and I can change it!
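For context, a padded batch plus mask could look like this (illustrative values; past_observed_mask follows the naming in this PR):

import torch
import torch.nn.functional as F

# Two series of different lengths packed into one (batch, time) tensor:
# the shorter one is left-padded with zeros, and the mask records which
# positions hold real observations (True) versus padding (False).
series_a = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
series_b = torch.tensor([10.0, 20.0, 30.0])

batch = torch.stack([series_a, F.pad(series_b, (2, 0))])
past_observed_mask = torch.tensor(
    [[True, True, True, True, True],
     [False, False, True, True, True]]
)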

Comment on lines +286 to +287
@require_torch
class TimesFm2_5ForwardInputVariantsTest(unittest.TestCase):
Member

i think this test suite is unnecessary if we just switch to tensors in the above tests and explicitly ask users to use tensors if they ever pass lists. my reasoning is that tensor inputs are simply better in this case.

@github-actions

[For maintainers] Suggested jobs to run (before merge)

run-slow: timesfm, timesfm2_5
