
fix(models): Fix LayoutLMv2 NER crash and broken batched truncation/padding #44187

Merged
zucchini-nlp merged 1 commit into huggingface:main from harshaljanjani:fix/layoutlmv2-ner-padding-truncation
Feb 23, 2026

Conversation

@harshaljanjani
Contributor

@harshaljanjani harshaljanjani commented Feb 20, 2026

What does this PR do?

The following issues were identified and fixed in this PR:

→ The NER/token-classification crash, and the downstream bug it uncovered in the batched preprocessing use case with LayoutLMv2Tokenizer.
Reasoning: The NER use case makes it apparent that the error is hit at this line in LayoutLMv2Tokenizer, which reads self.only_label_first_subword even though the attribute is never set. PreTrainedTokenizerBase doesn't create self.only_label_first_subword either, so it must be stored along with the other custom attributes; doing so directly resolves the NER use case, as shown in the screenshot (a sketch of the fix follows this list).
→ For the second fix: any padding="max_length" or truncation=True call without an explicit max_length arg compares self.model_max_length > LARGE_INTEGER (1e20), which in this case evaluates to True (since model_max_length falls back to VERY_LARGE_INTEGER), so both calls are silently turned into no-ops. The sequences in the batch then have different lengths and can't be tensorized, and the resulting ValueError misleadingly tells the user to pass padding=True and truncation=True, which they already did. The fix restores model_max_length=512 (see the second sketch below). I confirmed that both the base and large model configs have max_position_embeddings=512, so 512 is the correct default, and I followed the same pattern as MarianTokenizer and TapasTokenizer :)
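
A minimal sketch of the shape of the first fix (the signature below is abbreviated for illustration, not the full upstream one):

```python
from transformers import PreTrainedTokenizer

# Abbreviated sketch: __init__ accepted only_label_first_subword but never
# stored it, so token-classification code that later reads
# self.only_label_first_subword crashed with AttributeError.
class LayoutLMv2Tokenizer(PreTrainedTokenizer):
    def __init__(self, vocab_file, only_label_first_subword=True, **kwargs):
        # Store the flag alongside the other custom attributes (previously missing).
        self.only_label_first_subword = only_label_first_subword
        super().__init__(only_label_first_subword=only_label_first_subword, **kwargs)
```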

The 512 default was originally removed in #42894; I just wanted to double-check whether that was intentional, and I'm happy to adjust this fix if I've missed something :)
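
A minimal sketch of the second fix, following the MarianTokenizer/TapasTokenizer pattern of baking the 512 default into the signature (abbreviated for illustration):

```python
from transformers import PreTrainedTokenizer

# Abbreviated sketch: with an explicit 512 default, padding="max_length" and
# truncation=True resolve to a real length instead of hitting the
# VERY_LARGE_INTEGER fallback that silently turned them into no-ops.
class LayoutLMv2Tokenizer(PreTrainedTokenizer):
    def __init__(self, vocab_file, model_max_length=512, **kwargs):
        super().__init__(model_max_length=model_max_length, **kwargs)
```

With the default restored, the previously failing batched call goes through (repro sketch; the inputs are illustrative):

```python
from transformers import LayoutLMv2Tokenizer

tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")

words = [["short"], ["a", "much", "longer", "word", "sequence"]]
boxes = [[[1, 2, 3, 4]], [[1, 2, 3, 4]] * 5]

# Before the fixes this raised a misleading ValueError about padding/truncation;
# now both sequences pad to 512 and tensorize cleanly.
batch = tokenizer(words, boxes=boxes, padding="max_length", truncation=True, return_tensors="pt")
print(batch["input_ids"].shape)  # torch.Size([2, 512])
```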

Fixes #44186.

Before both fixes applied:

[screenshot]

Attribute fix resolves NER; batched use case still fails (feel free to cross-check; the errors are reproducible):

[screenshot]

After both fixes are applied, NER and the batched use case work (feel free to cross-check):

[screenshot]

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you fix any necessary existing tests?

@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: layoutlmv2

@harshaljanjani harshaljanjani marked this pull request as ready for review February 20, 2026 20:08
Member

@zucchini-nlp zucchini-nlp left a comment

Thanks for the detailed explanation, lgtm! Btw, @ArthurZucker , I think we should raise a warning/error asking to pass a max_length arg when model_max_length > VERY_LARGE_INT because the current error message is misleading
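
A hedged sketch of what such a guard could look like; the helper name, message wording, and placement are assumptions, not existing transformers API:

```python
import warnings

from transformers.tokenization_utils_base import LARGE_INTEGER

def warn_if_length_args_are_noops(tokenizer, padding, truncation, max_length):
    """Hypothetical helper: surface the silent no-op this PR diagnosed."""
    needs_length = padding == "max_length" or truncation is True
    if needs_length and max_length is None and tokenizer.model_max_length > LARGE_INTEGER:
        warnings.warn(
            "padding='max_length'/truncation=True requested without max_length, and "
            "the tokenizer has no usable model_max_length, so both become no-ops. "
            "Pass max_length explicitly or set tokenizer.model_max_length."
        )
```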

@zucchini-nlp
Member

run-slow: layoutlmv2

@github-actions
Contributor

Workflow Run ⚙️

This comment contains run-slow, running the specified jobs:

models: ["models/layoutlmv2"]
quantizations: []

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@github-actions
Contributor

CI Results

Workflow Run ⚙️

Commit Info

Context  Commit    Description
RUN      d4ac02a8  workflow commit (merge commit)
PR       ff05e228  branch commit (from PR)
main     df1cd3a7  base commit (on main)

✅ No failing test specific to this PR 🎉 👏 !

@zucchini-nlp zucchini-nlp merged commit a3dcad9 into huggingface:main Feb 23, 2026
20 checks passed
@harshaljanjani harshaljanjani deleted the fix/layoutlmv2-ner-padding-truncation branch February 23, 2026 10:30

Development

Successfully merging this pull request may close these issues.

[BUG] LayoutLMv2Tokenizer crashes on NER inputs and batched padding/truncation

4 participants