Add ctsm model #45490
Open
kashif wants to merge 10 commits into huggingface:main from
Conversation
Adds CTSM 1.0 (cisco-ai/cisco-time-series-model-1.0) as a first-class time-series foundation model. Architecturally it is a TimesFM 2.0 decoder with multi-resolution inputs (coarse + learned special token + fine), rotary position embeddings, bidirectional attention over the coarse block, and 15-quantile prediction.

- `modular_ctsm.py` reuses `TimesFmAttention`/`DecoderLayer`/`Model` and the TimesFm2_5 RoPE utilities, so RoPE and per-dim Q scaling are shared.
- `CtsmModel.forward` takes `(past_values_coarse, past_values_fine)` streams. `CtsmModelForPrediction.forward` takes a list of fine-resolution series, derives the coarse stream by mean-aggregation over `agg_factor` blocks, then runs an AR decode loop (see the usage sketch below).
- Registered in `auto_mappings`, `MODEL_MAPPING`, the time-series-prediction mapping, `models/__init__.py`, `_toctree.yml`, and docs.
- Tests mirror the timesfm2_5 pattern: full `ModelTesterMixin` coverage, with a custom eager-vs-SDPA equivalence test that uses the native two-stream interface since CTSM builds its own mask.
- The conversion script maps the fused `qkv_proj`, the input/horizon residual blocks, and `multi_resolution` / `special_token` / `freq_emb` to the transformers layout; it has been verified end-to-end against the 250M Hub checkpoint.
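A minimal usage sketch of the prediction wrapper. The checkpoint id is from the PR description; the `past_values` keyword and the `mean_predictions` output field follow the TimesFM convention and are assumptions, not verified against the final code:

```python
import torch
from transformers import CtsmModelForPrediction  # class added by this PR

# Kwarg and output-field names below are assumed to mirror TimesFM.
model = CtsmModelForPrediction.from_pretrained("cisco-ai/cisco-time-series-model-1.0")
model.eval()

# One fine-resolution context series; the wrapper derives the coarse stream
# internally by mean-aggregating blocks of `agg_factor` values.
fine_context = torch.randn(512)

with torch.no_grad():
    outputs = model(past_values=[fine_context])  # list of fine-res series

print(outputs.mean_predictions.shape)  # point forecast per horizon step
```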
The original CTSM reference normalizes each stream over the full non-padded context before the forward, then denormalizes the final prediction with the same stream stats. Inheriting TimesFM's first-patch normalization gives the same result mathematically (per-patch norm + denorm composed with stream norm + denorm is an identity over the extra factors), but it sends inputs to the transformer at a different scale than the checkpoint was trained on, and it is less efficient. This replaces the per-first-patch `_forward_transform` step with a single stream-level `_normalize_with_pad` (matching `PatchedTSMultiResolutionDecoder` in the reference), returns stream stats as `CtsmOutput.loc`/`scale`, and lets `CtsmModelForPrediction._decode_step` denormalize in a single pass (sketched below).

Verified against the 250M Hub checkpoint on the reference notebook datasets:

| dataset | model MAE | naive_last MAE | improvement |
| --- | --- | --- | --- |
| cpu_util | 2.11 | 3.36 | ~37% better |
| server_responsetime | 0.65 | 2.05 | ~3x better |
| internet_traffic | 805 | 4071 | ~5x better |

Quantile predictions stay monotone; all 95 tests still pass.
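A minimal sketch of the stream-level normalization this commit switches to. The function name mirrors `_normalize_with_pad`, but the padding convention (1 = padded) and the eps floor are assumptions:

```python
import torch

def normalize_with_pad(values: torch.Tensor, padding: torch.Tensor):
    """Illustrative stand-in for the stream-level `_normalize_with_pad` step.

    Stats are computed once over the full non-padded context, matching
    `PatchedTSMultiResolutionDecoder` in the CTSM reference; names and the
    eps value here are assumptions, not the PR's exact code.
    """
    mask = 1.0 - padding  # 1 where the context is real, 0 where padded
    n = mask.sum(dim=-1, keepdim=True).clamp(min=1.0)
    loc = (values * mask).sum(dim=-1, keepdim=True) / n
    var = ((values - loc) ** 2 * mask).sum(dim=-1, keepdim=True) / n
    scale = var.sqrt().clamp(min=1e-6)
    return (values - loc) * mask / scale, loc, scale

# Denormalization is then a single pass over the final prediction:
values = torch.randn(2, 512)
padding = torch.zeros(2, 512)
normed, loc, scale = normalize_with_pad(values, padding)
prediction = normed * scale + loc  # back on the original scale
```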
Each AR step recomputes the full forward by design: (1) coarse attention is bidirectional, so a new coarse patch invalidates every existing coarse K/V entry — the standard `DynamicCache.update(...)` append semantics can't express that; (2) stream normalization is recomputed per step over the raw context, which shifts every patch embedding. The original reference makes the same choice explicit (`CTSMAttentionRoPE` raises NotImplementedError on cache arguments), and it matches the convention of other time-series forecasters in transformers (TimesFM, TimesFM 2.5, PatchTST, Informer, Autoformer).
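A schematic of that full-recompute decode loop, under stated assumptions: the helper name and the `predictions` output field are illustrative, not the PR's exact API.

```python
import torch

def autoregressive_decode(model, fine_context, num_steps, agg_factor):
    """Schematic only. Every step rebuilds the coarse stream and re-runs
    the whole stack: (1) bidirectional coarse attention would invalidate
    cached coarse K/V whenever a new coarse patch appears, and (2) stream
    norm stats are recomputed over the grown raw context, shifting every
    patch embedding.
    """
    for _ in range(num_steps):
        # Coarse stream derived from the *current* raw fine context.
        coarse = fine_context.unfold(-1, agg_factor, agg_factor).mean(-1)
        out = model(past_values_coarse=coarse, past_values_fine=fine_context)
        next_patch = out.predictions[..., -1, :]  # newest horizon patch (assumed field)
        fine_context = torch.cat([fine_context, next_patch], dim=-1)
    return fine_context
```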
Rewrite the model doc to mirror the transformers model-doc template and pull content directly from the CTSM Technical Report (arXiv:2511.19841):

- Full author list verified in order against the arXiv author list.
- Quoted abstract.
- Architecture section distinguishing the paper's 1.0-preview (500M, 50 layers, 9 quantiles, CPT from TimesFM 2.0) from the 1.0 release checkpoint actually on the Hub (250M, 25 layers, 15 quantiles, trained from scratch; adds RoPE, bidirectional coarse attention, and short-context training).
- Inference section noting the AR multi-resolution decode loop and why there is no KV cache.
- Two usage snippets: auto-built coarse stream, and explicit (coarse, fine) pairs (a sketch of the latter follows this list).
- BibTeX citation using a BibTeX-safe form for the Yuhan Song entry (the parenthetical nickname in the paper parses oddly in BibTeX).
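For reference, a hedged sketch of the explicit-pairs snippet, assuming `agg_factor` is exposed on the config and the bare `CtsmModel` sits at `model.model`:

```python
import torch
from transformers import CtsmModelForPrediction

model = CtsmModelForPrediction.from_pretrained("cisco-ai/cisco-time-series-model-1.0")

fine = torch.randn(1, 512)
agg = model.config.agg_factor  # assumed config attribute name
# Coarse stream built explicitly by mean-aggregation over agg_factor blocks.
coarse = fine.unfold(-1, agg, agg).mean(-1)

with torch.no_grad():
    # `model.model` (the inner CtsmModel) takes the two streams directly.
    out = model.model(past_values_coarse=coarse, past_values_fine=fine)
```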
…n_mask

`CtsmModel` inherits from `TimesFmModel`, which already provides a `_prepare_4d_attention_mask(attention_mask, sequence_length, dtype, device, is_causal)` static method combining padding + causal into a single 4D additive mask. My `_build_attention_mask` was re-implementing the same logic (plus a one-line bidirectional-coarse zeroing), and `_convert_paddings_to_attention_bias` was duplicating the padding-to-bias conversion inside it. Replace both with a call to the inherited method plus the single bidirectional patch (sketched below). Numerically identical (cpu_util MAE 2.1093, same as before); all 95 tests still pass.
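A sketch of the replacement, assuming `num_coarse` marks the coarse span at the front of the sequence (the real one-line patch may differ):

```python
import torch

def build_ctsm_mask(attention_mask, sequence_length, num_coarse, dtype, device):
    # Inherited from TimesFmModel (signature as quoted above): padding +
    # causal combined into one 4D additive mask. `CtsmModel` is this PR's class.
    causal = CtsmModel._prepare_4d_attention_mask(
        attention_mask, sequence_length, dtype, device, is_causal=True
    )
    padding_only = CtsmModel._prepare_4d_attention_mask(
        attention_mask, sequence_length, dtype, device, is_causal=False
    )
    # The single bidirectional patch: inside the coarse block, drop the
    # causal part of the bias but keep the padding part, so coarse patches
    # attend to each other in both directions.
    causal[..., :num_coarse, :num_coarse] = padding_only[..., :num_coarse, :num_coarse]
    return causal
```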
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
`CtsmOutputForPrediction` inherits `loss` from `TimesFmOutputForPrediction`, but the `@auto_docstring` check requires every field of the dataclass to be documented in the class docstring. Add the missing `loss` entry (sketched below) and rerun the modular converter + `ruff format` so the generated file stays in sync.
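The fix is just a docstring entry; a sketch of the shape it takes (import path and field description paraphrased, remaining fields elided):

```python
from dataclasses import dataclass

# Import path assumed; TimesFmOutputForPrediction lives in the timesfm model files.
from transformers.models.timesfm.modeling_timesfm import TimesFmOutputForPrediction


@dataclass
class CtsmOutputForPrediction(TimesFmOutputForPrediction):
    """
    Output type of `CtsmModelForPrediction`.

    Args:
        loss (`torch.Tensor` of shape `(1,)`, *optional*):
            Loss inherited from `TimesFmOutputForPrediction`; documented here
            because @auto_docstring requires every dataclass field to appear
            in the class docstring.
        ...
    """
```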
Mirrors TimesFmModel / TimesFm2_5Model: CtsmModel is the building block used by CtsmModelForPrediction, which is the only class in `all_model_classes` in the test file. Common tests exercise CtsmModel through the prediction wrapper; there is nothing to add to the test list.
For `horizon_len > config.horizon_length`, `CtsmModelForPrediction` now reuses a `DynamicCache` across autoregressive steps:

- Step 1 runs a full forward over `[coarse, special, fine]` and populates the cache with K/V per layer.
- Subsequent steps feed only the four new fine patches through the stack; their Q/K/V attend to the `past_key_values.update(...)`-merged K/V.
- Stream normalization stats are frozen at their step-1 values so cached embeddings stay on a consistent scale; the coarse block is pinned; if the cache would outgrow `max_position_embeddings`, it is discarded and rebuilt from the current raw contexts.
- `use_cache: bool | None` on `CtsmModelForPrediction.forward` lets callers force the old full-recompute path if they prefer.

API additions mirror Llama et al. (see the sketch after this list):

- `CtsmAttention.forward(..., past_key_values=None)`
- `CtsmDecoderLayer.forward(..., past_key_values=None)`
- `CtsmModel.forward(..., past_key_values=None, use_cache=None, cache_position=None, loc_fine=None, scale_fine=None)`: when `past_key_values` is provided, `past_values_fine` must contain only the new fine values, and `loc_fine` / `scale_fine` must be supplied so normalization matches the cached state.
- `CtsmOutput.past_key_values` field.

Benchmarks on the 250M Hub checkpoint (CPU, horizon=512, cpu_utilization):

| setting | latency | MAE |
| --- | --- | --- |
| use_cache=False | 521 ms | 2.6852 |
| use_cache=True | 400 ms | 2.6852 |

MAE is bit-identical across the three notebook datasets. Added a `test_kv_cache_matches_full_recompute` regression test that verifies step-1 predictions are exact and subsequent AR steps stay within a generous bound on the tiny random-weights tester model.
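A sketch of the cached step, with keyword names taken from the API list above; the control flow, including the overflow rebuild, is simplified and the length accounting is schematic:

```python
import torch
from transformers import DynamicCache

def cached_decode_step(model, new_fine_patches, cache, loc_fine, scale_fine, cache_position):
    # Rebuild from the raw contexts if the cache would outgrow the position
    # table (the rebuild itself is elided; length units simplified here).
    if cache.get_seq_length() + new_fine_patches.shape[-1] > model.config.max_position_embeddings:
        cache = DynamicCache()
        ...  # full forward over [coarse, special, fine] repopulates the cache

    # Only the new fine patches go through the stack; per-layer K/V are
    # merged into `cache` via DynamicCache.update(...). loc_fine/scale_fine
    # keep normalization frozen at the step-1 stats.
    out = model(
        past_values_fine=new_fine_patches,
        past_key_values=cache,
        use_cache=True,
        cache_position=cache_position,
        loc_fine=loc_fine,
        scale_fine=scale_fine,
    )
    return out
```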
[For maintainers] Suggested jobs to run (before merge): `run-slow: auto, ctsm`