71 commits
7c722ba
Add supported kwargs to fixed_cross_entropy
Rocketknight1 Jan 13, 2026
afb3f23
make style
Rocketknight1 Jan 13, 2026
73b2f36
Refactor GPT-Neo to use @capture_outputs and @can_return_tuple decora…
Feb 15, 2026
0df1861
Add @merge_with_config_defaults and fix test for output_attentions re…
Feb 16, 2026
eaef822
init: Add files (v1)
harshaljanjani Feb 27, 2026
ddc1bd7
fix: Fix ci/circleci: check_repository_consistency
harshaljanjani Feb 27, 2026
85c7356
feat: Add support and test harness for all variants
harshaljanjani Mar 1, 2026
adc4079
fix: Fix ci/circleci: check_repository_consistency
harshaljanjani Mar 1, 2026
81a3d06
Merge branch 'main' into add-deimv2
harshaljanjani Mar 1, 2026
39d300e
refactor: Resolve review comments
harshaljanjani Mar 17, 2026
476d69f
Merge branch 'main' into add-deimv2
harshaljanjani Mar 19, 2026
4ad0dc5
refactor: Resolve second review round
harshaljanjani Mar 19, 2026
16f2d07
nit: Fix copyright year
harshaljanjani Mar 19, 2026
78eaf93
Merge branch 'main' into add-deimv2
harshaljanjani Mar 19, 2026
dbe577b
Merge branch 'main' into add-deimv2
harshaljanjani Mar 21, 2026
1259628
Merge branch 'main' into add-deimv2
harshaljanjani Mar 28, 2026
31ee908
refactor: Resolve third review round
harshaljanjani Mar 28, 2026
16cc6d4
fix AttributeError in _patch_mistral_regex for Mistral tokenizer
knQzx Mar 28, 2026
4a3a877
revert: Adhere to the pattern from yonigozlan
harshaljanjani Mar 29, 2026
558c2af
Merge branch 'main' into add-deimv2
harshaljanjani Mar 30, 2026
ada78bf
nit: Clarify the docstring
harshaljanjani Mar 30, 2026
496ce9c
refactor: Resolve fourth review round
harshaljanjani Mar 31, 2026
5a12a56
Merge branch 'main' into add-deimv2
harshaljanjani Mar 31, 2026
d202aca
Fix AttributeError in _patch_mistral_regex by removing .backend_token…
Apr 8, 2026
93b05c0
Add regression test for fix_mistral_regex=True patching code path
Apr 9, 2026
85b4079
Merge branch 'main' into add-deimv2
harshaljanjani Apr 16, 2026
422a440
refactor: Closing in on the final set of nits
harshaljanjani Apr 16, 2026
22be6ec
utils: stop crashing with KeyError when flash_attn is importable but …
SAY-5 Apr 20, 2026
f932158
Merge branch 'main' into add-deimv2
harshaljanjani Apr 20, 2026
b833ee3
fix: Resolve merge conflicts
harshaljanjani Apr 20, 2026
58a6424
fix: Add loss override and address nits
harshaljanjani Apr 21, 2026
7dd0fb1
nits: Fix minor issues
harshaljanjani Apr 22, 2026
943f4bb
fixup their init weights
vasqu Apr 22, 2026
6213518
Merge branch 'main' into add-deimv2
vasqu Apr 22, 2026
07e3831
[CB] Changes for long generation (#45530)
remi-or Apr 23, 2026
706acf5
Allow for registered experts from kernels hub (#45577)
winglian Apr 23, 2026
bd69ed2
[docs] multi-turn tool calling (#45554)
stevhliu Apr 23, 2026
8e64e53
[AMD CI] Fix expectations for Gemma3n (#45602)
Abdennacer-Badaoui Apr 23, 2026
0323898
fix transformers + torchao nvfp4 serialization (#45573)
vkuzo Apr 23, 2026
533c4e1
SonicMoe (#45433)
IlyasMoutawwakil Apr 23, 2026
1e071b2
Processing Utils: continue when content is a string (#45605)
RyanMullins Apr 23, 2026
57f9936
qa: bumped mlinter and allow local override (#45585)
tarekziade Apr 23, 2026
fb1f387
fix: Fix loss coupling issue
harshaljanjani Apr 23, 2026
3629f13
Merge branch 'main' into add-deimv2
harshaljanjani Apr 23, 2026
91904ac
Fix configuration reading and error handling for kernels (#45610)
hmellor Apr 23, 2026
5cf7951
fix: compute auxiliary losses when denoising is disabled in D-FINE (#…
Abineshabee Apr 23, 2026
16f3dde
Remove unnecessary generate warnings (#45619)
Cyrilvallez Apr 24, 2026
f0a5a1c
generate: drop stale num_return_sequences warning on continuous batch…
joaquinhuigomez Apr 24, 2026
a66638d
Skip failing offloading tests (#45624)
Cyrilvallez Apr 24, 2026
f0f456b
chore(qa): split pipeline and add type checking (#45432)
tarekziade Apr 24, 2026
23ca437
Allow more artifacts to be download in CI (#45629)
ydshieh Apr 24, 2026
622b8e9
chore: bump doc-builder SHA for main doc build workflow (#45631)
rtrompier Apr 24, 2026
678e871
CircleCI with torch 2.11 (#45633)
ydshieh Apr 24, 2026
c472755
Raise clear error for `problem_type="single_label_classification"` wi…
gaurav0107 Apr 24, 2026
47a512b
Fix xdist collisions for captured_info artifacts and preserve CI debu…
stationeros Apr 25, 2026
ded2b74
Add `supports_gradient_checkpointing` to `NemotronHPreTrainedModel` (…
sergiopaniego Apr 27, 2026
4f85f85
Fix whisper return language (#42227)
FredHaa Apr 27, 2026
d94ced8
Merge branch 'main' into main
stationeros Apr 27, 2026
c81eeec
Merge PR #44339: model: Add DEIMv2 to Transformers
evalstate Apr 27, 2026
a988888
Merge commit 'refs/mergeability/pr-43254' into merge-cluster-cluster-…
evalstate Apr 27, 2026
6ddb3cd
Merge commit 'refs/pull/45317/head' of https://github.com/huggingface…
evalstate Apr 27, 2026
44c68dc
Merge commit 'refs/pull/45086/head' of https://github.com/huggingface…
evalstate Apr 27, 2026
4bbd240
Merge commit 'refs/pull/45524/head' of https://github.com/huggingface…
evalstate Apr 27, 2026
9cb1a72
Merge PR #45645: Fix xdist captured_info collisions
evalstate Apr 27, 2026
63b8d18
Merge PR #44018: Refactor GPT-Neo output tracing
evalstate Apr 27, 2026
e28a543
Merge branch 'merge-cluster-cluster-43240-3-20260427115403' into supe…
evalstate Apr 27, 2026
ea4e0fa
Merge branch 'merge-cluster-cluster-45520-3-20260427115403' into supe…
evalstate Apr 27, 2026
b4b8ee0
Merge branch 'merge-cluster-cluster-45081-3-20260427115403' into supe…
evalstate Apr 27, 2026
793d57e
Merge branch 'merge-cluster-cluster-44018-2-20260427115403' into supe…
evalstate Apr 27, 2026
a54f010
Merge branch 'merge-cluster-cluster-41211-3-20260427115403' into supe…
evalstate Apr 27, 2026
c19b577
Merge branch 'merge-cluster-cluster-45561-3-20260427115403' into supe…
evalstate Apr 27, 2026
4 changes: 2 additions & 2 deletions .github/workflows/build_documentation.yml
Original file line number Diff line number Diff line change
@@ -11,7 +11,7 @@ on:

jobs:
build:
- uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@90b4ee2c10b81b5c1a6367c4e6fc9e2fb510a7e3 # main
+ uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@2430c1ec91d04667414e2fa31ecfc36c153ea391 # main
with:
commit_sha: ${{ github.sha }}
package: transformers
@@ -23,7 +23,7 @@ jobs:
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}

build_other_lang:
- uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@90b4ee2c10b81b5c1a6367c4e6fc9e2fb510a7e3 # main
+ uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@2430c1ec91d04667414e2fa31ecfc36c153ea391 # main
with:
commit_sha: ${{ github.sha }}
package: transformers
14 changes: 9 additions & 5 deletions .github/workflows/check_failed_tests.yml
@@ -55,7 +55,7 @@ jobs:
n_runners: ${{ steps.set-matrix.outputs.n_runners }}
process: ${{ steps.set-matrix.outputs.process }}
steps:
- - uses: actions/download-artifact@v4
+ - uses: actions/download-artifact@v8
continue-on-error: true
with:
name: ci_results_${{ inputs.job }}
@@ -127,12 +127,14 @@ jobs:
image: ${{ inputs.docker }}
options: --gpus all --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
steps:
- - uses: actions/download-artifact@v4
+ - uses: actions/download-artifact@v8
with:
name: ci_results_${{ inputs.job }}
path: /transformers/ci_results_${{ inputs.job }}

- - uses: actions/download-artifact@v4
+ - uses: actions/download-artifact@v8
+   env:
+     ACTIONS_ARTIFACT_MAX_ARTIFACT_COUNT: 2000
with:
pattern: setup_values*
path: setup_values
@@ -255,12 +257,14 @@ jobs:
image: ${{ inputs.docker }}
options: --gpus all --shm-size "16gb" --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/
steps:
- - uses: actions/download-artifact@v4
+ - uses: actions/download-artifact@v8
with:
name: ci_results_${{ inputs.job }}
path: /transformers/ci_results_${{ inputs.job }}

- - uses: actions/download-artifact@v4
+ - uses: actions/download-artifact@v8
+   env:
+     ACTIONS_ARTIFACT_MAX_ARTIFACT_COUNT: 2000
with:
pattern: new_failures_with_bad_commit_${{ inputs.job }}*
path: /transformers/new_failures_with_bad_commit_${{ inputs.job }}
13 changes: 12 additions & 1 deletion .github/workflows/model_jobs.yml
@@ -186,7 +186,18 @@ jobs:
env:
report_name_prefix: ${{ inputs.report_name_prefix }}
run: |
- cat "/transformers/reports/${machine_type}_${report_name_prefix}_${matrix_folders}_test_reports/captured_info.txt"
+ shopt -s nullglob
+ captured_info_files=("/transformers/reports/${machine_type}_${report_name_prefix}_${matrix_folders}_test_reports"/captured_info*.txt)
+
+ if [ ${#captured_info_files[@]} -eq 0 ]; then
+   echo "No captured information files found."
+   exit 0
+ fi
+
+ for captured_info_file in "${captured_info_files[@]}"; do
+   echo "===== ${captured_info_file##*/} ====="
+   cat "$captured_info_file"
+ done
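The `shopt -s nullglob` at the top of the new step is what makes the empty-case branch reachable: without it, a glob that matches nothing is kept as the literal pattern, so the array would hold one bogus entry and the `-eq 0` check would never fire. A standalone bash sketch of that behavior, using a made-up path:

```shell
shopt -s nullglob
# With nullglob set, an unmatched glob expands to zero words instead of the
# literal pattern string, so this array is genuinely empty (path is made up).
files=(/nonexistent-dir/captured_info*.txt)
echo "count=${#files[@]}"   # prints "count=0"
```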

- name: Copy test_outputs.txt
if: ${{ always() }}
4 changes: 3 additions & 1 deletion .github/workflows/self-scheduled.yml
@@ -601,7 +601,9 @@ jobs:
- name: Create output directory
run: mkdir warnings_in_ci

- - uses: actions/download-artifact@v4
+ - uses: actions/download-artifact@v8
+   env:
+     ACTIONS_ARTIFACT_MAX_ARTIFACT_COUNT: 2000
with:
path: warnings_in_ci

4 changes: 3 additions & 1 deletion .github/workflows/slack-report.yml
@@ -55,7 +55,9 @@ jobs:
# Security: checkout to the `main` branch for untrusted triggers (issue_comment, pull_request_target), otherwise use the specified ref
ref: ${{ (github.event_name == 'issue_comment' || github.event_name == 'pull_request_target') && 'main' || (inputs.commit_sha || github.sha) }}

- - uses: actions/download-artifact@v4
+ - uses: actions/download-artifact@v8
+   env:
+     ACTIONS_ARTIFACT_MAX_ARTIFACT_COUNT: 2000

- name: Prepare some setup values
run: |
2 changes: 1 addition & 1 deletion docker/consistency.dockerfile
@@ -5,7 +5,7 @@ ARG REF=main
RUN apt-get update && apt-get install -y time git g++ pkg-config make git-lfs
ENV UV_PYTHON=/usr/local/bin/python
RUN pip install uv && uv pip install --no-cache-dir -U pip setuptools GitPython
- RUN uv pip install --no-cache-dir --upgrade 'torch<=2.10.0' 'torchaudio' 'torchvision' --index-url https://download.pytorch.org/whl/cpu
+ RUN uv pip install --no-cache-dir --upgrade 'torch<=2.11.0' 'torchaudio' 'torchvision' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir pypi-kenlm
RUN uv pip install --no-cache-dir "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[quality,testing,torch-speech,vision]"
RUN git lfs install
2 changes: 1 addition & 1 deletion docker/custom-tokenizers.dockerfile
@@ -17,7 +17,7 @@ RUN make install -j 10

WORKDIR /

- RUN uv pip install --no-cache --upgrade 'torch<=2.10.0' --index-url https://download.pytorch.org/whl/cpu
+ RUN uv pip install --no-cache --upgrade 'torch<=2.11.0' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir --no-deps accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[ja,testing,sentencepiece,spacy,rjieba]" unidic unidic-lite
# spacy is not used so not tested. Causes to failures. TODO fix later
2 changes: 1 addition & 1 deletion docker/examples-torch.dockerfile
@@ -5,7 +5,7 @@ USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git-lfs ffmpeg curl
ENV UV_PYTHON=/usr/local/bin/python
RUN pip --no-cache-dir install uv && uv pip install --no-cache-dir -U pip setuptools
- RUN uv pip install --no-cache-dir 'torch<=2.10.0' 'torchaudio' 'torchvision' 'torchcodec<=0.10.0' --index-url https://download.pytorch.org/whl/cpu
+ RUN uv pip install --no-cache-dir 'torch<=2.11.0' 'torchaudio' 'torchvision' 'torchcodec<=0.11.0' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir librosa "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[sklearn,sentencepiece,vision,testing]" seqeval albumentations jiwer

2 changes: 1 addition & 1 deletion docker/exotic-models.dockerfile
@@ -5,7 +5,7 @@ USER root
RUN apt-get update && apt-get install -y libsndfile1-dev espeak-ng time git libgl1 g++ tesseract-ocr git-lfs curl
ENV UV_PYTHON=/usr/local/bin/python
RUN pip --no-cache-dir install uv && uv pip install --no-cache-dir -U pip setuptools
- RUN uv pip install --no-cache-dir 'torch<=2.10.0' 'torchaudio' 'torchvision' --index-url https://download.pytorch.org/whl/cpu
+ RUN uv pip install --no-cache-dir 'torch<=2.11.0' 'torchaudio' 'torchvision' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir --no-deps timm accelerate
RUN uv pip install -U --no-cache-dir pytesseract python-Levenshtein opencv-python nltk
# RUN uv pip install --no-cache-dir natten==0.15.1+torch210cpu -f https://shi-labs.com/natten/wheels
2 changes: 1 addition & 1 deletion docker/pipeline-torch.dockerfile
@@ -5,7 +5,7 @@ USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git pkg-config openssh-client git ffmpeg curl
ENV UV_PYTHON=/usr/local/bin/python
RUN pip --no-cache-dir install uv && uv pip install --no-cache-dir -U pip setuptools
- RUN uv pip install --no-cache-dir 'torch<=2.10.0' 'torchaudio' 'torchvision' 'torchcodec<=0.10.0' --index-url https://download.pytorch.org/whl/cpu
+ RUN uv pip install --no-cache-dir 'torch<=2.11.0' 'torchaudio' 'torchvision' 'torchcodec<=0.11.0' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir librosa "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[sklearn,sentencepiece,vision,testing]"

2 changes: 1 addition & 1 deletion docker/torch-light.dockerfile
@@ -5,7 +5,7 @@ USER root
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1-dev espeak-ng time git g++ cmake pkg-config openssh-client git-lfs ffmpeg curl
ENV UV_PYTHON=/usr/local/bin/python
RUN pip --no-cache-dir install uv && uv pip install --no-cache-dir -U pip setuptools
- RUN uv pip install --no-cache-dir 'torch<=2.10.0' 'torchaudio' 'torchvision' 'torchcodec<=0.10.0' --index-url https://download.pytorch.org/whl/cpu
+ RUN uv pip install --no-cache-dir 'torch<=2.11.0' 'torchaudio' 'torchvision' 'torchcodec<=0.11.0' --index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-deps timm accelerate --extra-index-url https://download.pytorch.org/whl/cpu
RUN uv pip install --no-cache-dir librosa "git+https://github.com/huggingface/transformers.git@${REF}#egg=transformers[sklearn,sentencepiece,vision,testing,tiktoken,num2words,video]"

2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -899,6 +899,8 @@
title: DAB-DETR
- local: model_doc/deformable_detr
title: Deformable DETR
+ - local: model_doc/deimv2
+   title: DEIMv2
- local: model_doc/deit
title: DeiT
- local: model_doc/depth_anything
65 changes: 65 additions & 0 deletions docs/source/en/model_doc/deimv2.md
@@ -0,0 +1,65 @@
<!--Copyright 2026 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
*This model was released on 2025-09-25 and added to Hugging Face Transformers on 2026-04-22.*

# DEIMv2

## Overview

DEIMv2 (DETR with Improved Matching v2) was proposed in [DEIMv2: Real-Time Object Detection Meets DINOv3](https://huggingface.co/papers/2509.20787) by Shihua Huang, Yongjie Hou, Longfei Liu, Xuanlong Yu, and Xi Shen.

The abstract from the paper is the following:

*Driven by the simple and effective Dense O2O, DEIM demonstrates faster convergence and enhanced performance. In this work, we extend it with DINOv3 features, resulting in DEIMv2. DEIMv2 spans eight model sizes from X to Atto, covering GPU, edge, and mobile deployment. For the X, L, M, and S variants, we adopt DINOv3-pretrained / distilled backbones and introduce a Spatial Tuning Adapter (STA), which efficiently converts DINOv3's single-scale output into multi-scale features and complements strong semantics with fine-grained details to enhance detection. For ultra-lightweight models (Nano, Pico, Femto, and Atto), we employ HGNetv2 with depth and width pruning to meet strict resource budgets. Together with a simplified decoder and an upgraded Dense O2O, this unified design enables DEIMv2 to achieve a superior performance-cost trade-off across diverse scenarios, establishing new state-of-the-art results. Notably, our largest model, DEIMv2-X, achieves 57.8 AP with only 50.3M parameters, surpassing prior X-scale models that require over 60M parameters for just 56.5 AP. On the compact side, DEIMv2-S is the first sub-10M model (9.71M) to exceed the 50 AP milestone on COCO, reaching 50.9 AP. Even the ultra-lightweight DEIMv2-Pico, with just 1.5M parameters, delivers 38.5 AP, matching YOLOv10-Nano (2.3M) with ~50% fewer parameters.*

## Usage

```python
from transformers import AutoImageProcessor, AutoModelForObjectDetection
from transformers.image_utils import load_image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)

image_processor = AutoImageProcessor.from_pretrained("harshaljanjani/DEIMv2_HGNetv2_N_COCO_Transformers")
model = AutoModelForObjectDetection.from_pretrained("harshaljanjani/DEIMv2_HGNetv2_N_COCO_Transformers", device_map="auto")

inputs = image_processor(images=image, return_tensors="pt").to(model.device)
outputs = model(**inputs)

results = image_processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=[image.size[::-1]]
)

for result in results:
    for score, label, box in zip(result["scores"], result["labels"], result["boxes"]):
        box = [round(i, 2) for i in box.tolist()]
        print(f"Detected {model.config.id2label[label.item()]} with confidence {round(score.item(), 3)} at location {box}")
```
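One detail in the snippet worth spelling out: `target_sizes=[image.size[::-1]]` reverses the tuple because PIL's `Image.size` is `(width, height)` while `post_process_object_detection` expects each target size as `(height, width)`. A minimal illustration of just that reversal, with a hard-coded size standing in for a real image:

```python
# PIL's Image.size is (width, height); post_process_object_detection expects
# each target_sizes entry as (height, width), hence the [::-1] reversal.
size = (640, 480)         # stand-in for image.size: width=640, height=480
target_size = size[::-1]  # (height, width)
print(target_size)        # prints (480, 640)
```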

## Deimv2Config

[[autodoc]] Deimv2Config

## Deimv2Model

[[autodoc]] Deimv2Model
- forward

## Deimv2ForObjectDetection

[[autodoc]] Deimv2ForObjectDetection
- forward
16 changes: 8 additions & 8 deletions docs/source/en/modeling_rules.md
@@ -13,22 +13,22 @@ specific language governing permissions and limitations under the License.

# Model structure rules

- Transformers enforces a set of static rules on every `modeling_*.py`, `modular_*.py`, and `configuration_*.py` file. The [mlinter](https://github.com/huggingface/transformers-mlinter) tool checks them as part of `make typing` and errors out if violations are found.
+ Transformers enforces a set of static rules on every `modeling_*.py`, `modular_*.py`, and `configuration_*.py` file. The [mlinter](https://github.com/huggingface/transformers-mlinter) package provides the checker engine, and the repository keeps its active rule set in `utils/rules.toml`. That local TOML lets us enable, disable, or tweak rules quickly without waiting for a new `transformers-mlinter` release.

These are the expected model conventions for adding or changing modeling code. They keep the codebase consistent and ensure compatibility with features like pipeline parallelism, device maps, and weight tying.

## Running the checker

- `make typing` runs `mlinter` alongside the `ty` type checker. Run `mlinter` on its own with the following commands.
+ `make typing` runs `mlinter` alongside the `ty` type checker through the repo wrapper, so it picks up `utils/rules.toml`. Run the same wrapper directly with the following commands.

```bash
- mlinter # check all modeling files
- mlinter --changed-only # check only files changed vs origin/main
- mlinter --list-rules # list all rules and their enabled status
- mlinter --rule TRF001 # show built-in docs for a specific rule
+ python utils/check_modeling_structure.py # check all modeling files
+ python utils/check_modeling_structure.py --changed-only # check only files changed vs origin/main
+ python utils/check_modeling_structure.py --list-rules # list all rules and their enabled status
+ python utils/check_modeling_structure.py --rule TRF001 # show built-in docs for a specific rule
```

- The `--changed-only` flag is the fastest option during development. It only checks the files you've modified relative to the main branch.
+ The `--changed-only` flag is the fastest option during development. It only checks the files you've modified relative to the main branch. If you invoke `mlinter` directly instead of the wrapper, pass `--rules-toml utils/rules.toml` so local overrides are applied.
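The diff never shows what `utils/rules.toml` actually contains, and its schema is defined by `transformers-mlinter`, so the fragment below is purely a hypothetical sketch of a rule-override file — every key name is invented for illustration, not the real format:

```toml
# Hypothetical sketch only: key names are invented for illustration;
# the real schema is whatever transformers-mlinter defines.
[rules.TRF001]
enabled = true

[rules.TRF002]
enabled = false  # e.g. temporarily disabled pending a refactor
```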

## Fixing a violation

@@ -52,7 +52,7 @@ Use the rule ID to look up the fix in the [rules reference](#rules-reference). T

## Rules reference

- Each rule below lists what it enforces and a diff showing the fix. Run `mlinter --rule TRF001` to see the built-in docs for any rule.
+ Each rule below lists what it enforces and a diff showing the fix. Run `python utils/check_modeling_structure.py --rule TRF001` to see the built-in docs for any rule with the repo's current rule set.

<!-- BEGIN RULES REFERENCE -->
