
fix gemma4 gradient accumulation loss and last token incorrect labels #45354

Merged

Cyrilvallez merged 2 commits into huggingface:main from winglian:fix-gemma4 on Apr 10, 2026

Conversation

@winglian (Collaborator) commented Apr 10, 2026

What does this PR do?

Gemma 4 was calculating the CE loss incorrectly and not handling gradient accumulation properly, so losses were scaled up by the number of gradient accumulation steps instead of letting the built-in HF CE loss function normalize correctly with num_items_in_batch. Gemma 4's custom loss function was simply using the attention mask to filter out tokens, but this is already handled when labels are set to -100.
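
For reference, a minimal sketch of the normalization the built-in loss path performs (the function and argument names here are illustrative, not the exact transformers internals):

```python
import torch.nn.functional as F

def causal_lm_loss(logits, labels, num_items_in_batch=None, ignore_index=-100):
    # Shift so position t predicts token t+1; the final position has no target.
    shift_logits = logits[..., :-1, :].contiguous().view(-1, logits.size(-1))
    shift_labels = labels[..., 1:].contiguous().view(-1)
    if num_items_in_batch is not None:
        # Sum per-token losses for this micro-batch, then divide by the token
        # count of the whole accumulated batch, so the summed gradients match a
        # single large batch. Taking a mean per micro-batch instead leaves the
        # result scaled up by the number of accumulation steps.
        loss = F.cross_entropy(
            shift_logits, shift_labels, ignore_index=ignore_index, reduction="sum"
        )
        return loss / num_items_in_batch
    # Without a global token count, fall back to a plain mean over valid tokens.
    return F.cross_entropy(shift_logits, shift_labels, ignore_index=ignore_index)
```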

Additionally, when doing multimodal training, the loss often starts off at ~15. This is partly because most multimodal datasets are out of distribution (I tested this by having Gemma 4 generate a response to a prompt and checking the loss on that output, and it was close to a more normal ~1 loss). The second reason is that the logit/label shift could result in the last token in a sequence trying to predict a token like a pad token, and it wouldn't be masked out because the original loss function didn't handle this; using proper -100 labels does.
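
To illustrate that second point, a small hypothetical example (the pad_token_id and token values are made up) of how -100 labels keep the shifted last position from targeting padding:

```python
import torch

pad_token_id = 0  # illustrative value
input_ids = torch.tensor([[5, 17, 42, pad_token_id]])
labels = input_ids.clone()
labels[input_ids == pad_token_id] = -100  # pad positions can never be targets

# After the one-token shift, the last real token (42) would otherwise be asked
# to predict the pad token; with the mask above its target is -100, so cross
# entropy skips it.
shift_labels = labels[..., 1:]  # tensor([[17, 42, -100]])
```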

Fixes # (issue)

Code Agent Policy

The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.

PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.

This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.

  • I confirm that this is not a pure code agent PR.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

@ArthurZucker @Cyrilvallez @zucchini-nlp

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@github-actions (Contributor)

[For maintainers] Suggested jobs to run (before merge)

run-slow: gemma3n, gemma4

@Cyrilvallez (Member) left a comment


Thanks @winglian! Reflected the change in modular, and added gemma3n as well at the same time!

@Cyrilvallez Cyrilvallez enabled auto-merge April 10, 2026 09:51
@Cyrilvallez Cyrilvallez added this pull request to the merge queue Apr 10, 2026
Merged via the queue into huggingface:main with commit 47d7765 Apr 10, 2026
21 checks passed
sirzechs66 pushed a commit to sirzechs66/transformers that referenced this pull request Apr 18, 2026
…huggingface#45354)

* fix gemma4 gradient accumulation loss and last token incorrect labels

* modular + also gemma3n

---------

Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>