Paligemma: fix generation with Gemma2#36044

Merged
ArthurZucker merged 4 commits into huggingface:main from zucchini-nlp:paligemma-fix-kwargs
Feb 6, 2025

Conversation

@zucchini-nlp
Member

What does this PR do?

Fixes #36029 and adds tests for the model. IMO we need tests with a different LM backbone, because Gemma-2 is special

This is a quick fix, but I think we should make this kind of fix work out-of-the-box on the LM side, for example by adding it as kwargs. Most LMs accept loss_kwargs, so we can make all multimodal models also accept kwargs that are simply passed on to the LM. WDYT?
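To make the proposal concrete, here is a minimal sketch of the pass-through idea (hypothetical class and argument names, not the actual transformers API):

```python
import torch
from torch import nn


class ToyVLM(nn.Module):
    """Hypothetical wrapper sketching the kwargs pass-through idea."""

    def __init__(self, vision_tower: nn.Module, language_model: nn.Module):
        super().__init__()
        self.vision_tower = vision_tower
        self.language_model = language_model

    def forward(self, inputs_embeds, pixel_values, **lm_kwargs):
        # Prepend projected image features to the text embeddings.
        image_embeds = self.vision_tower(pixel_values)
        merged = torch.cat([image_embeds, inputs_embeds], dim=1)
        # Anything the wrapper does not recognize (e.g. backbone-specific
        # arguments like Gemma-2's) is handed straight to the LM, so no
        # per-model plumbing is needed.
        return self.language_model(merged, **lm_kwargs)
```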

zucchini-nlp added the for patch label Feb 5, 2025
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Review thread on src/transformers/models/paligemma/modeling_paligemma.py (outdated)
Collaborator

@ArthurZucker left a comment


we can just use kwargs, no?

@zucchini-nlp
Member Author

I think it's better to make it explicit that the kwargs will be used only by the LM

Member

@Cyrilvallez left a comment


Fine with me! Thanks a lot!

Collaborator


let's say that an integration test is most welcome as well!

Member Author


yeah, it was quite low priority for the patch so I decided to skip it for now :)

ArthurZucker merged commit 3dd1de3 into huggingface:main Feb 6, 2025
@ArthurZucker
Collaborator

For transparency: this commit needs to be modified for the patch, applying the changes only for PaliGemma2

ArthurZucker pushed a commit that referenced this pull request Feb 6, 2025
* fix paligemma

* nit

* use `kwargs` in models that can load any LM

* update changes to only affect Paligemma
MekkCyber pushed a commit that referenced this pull request Feb 7, 2025
elvircrn pushed a commit to elvircrn/transformers that referenced this pull request Feb 13, 2025
sbucaille pushed a commit to sbucaille/transformers that referenced this pull request Feb 16, 2025
@hiyouga
Contributor

hiyouga commented Mar 24, 2025

Hi @zucchini-nlp @ArthurZucker, I found that this PR can lead to an abnormal loss value when gradient accumulation is enabled. As initially reported in hiyouga/LlamaFactory#7443, the Trainer assumes these models accept loss kwargs because of the existence of lm_kwargs (1), but they actually do not (2), resulting in an unexpected loss value. This is also related to the gradient accumulation fix PR #34511, cc @muellerzr

  1. https://github.com/huggingface/transformers/blob/v4.50.0/src/transformers/trainer.py#L633-L640
  2. https://github.com/huggingface/transformers/blob/v4.50.0/src/transformers/models/llava/modeling_llava.py#L460-L464
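To illustrate the mismatch, a rough paraphrase of the linked code with a hypothetical toy forward (not the actual trainer source):

```python
import inspect


def accepts_loss_kwargs(forward) -> bool:
    # Paraphrase of the Trainer heuristic in link (1): a **kwargs
    # (VAR_KEYWORD) parameter on forward() is taken to mean the model can
    # consume num_items_in_batch for correct loss averaging.
    params = inspect.signature(forward).parameters
    return any(p.kind == inspect.Parameter.VAR_KEYWORD for p in params.values())


# A VLM forward shaped like link (2): **lm_kwargs makes the heuristic pass...
def vlm_forward(input_ids, pixel_values=None, labels=None, **lm_kwargs):
    ...


assert accepts_loss_kwargs(vlm_forward)
# ...but the model computes its own mean-reduced cross-entropy and never reads
# lm_kwargs["num_items_in_batch"], so each micro-batch is averaged on its own
# and the accumulated loss is mis-scaled.
```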

@hiyouga
Contributor

hiyouga commented Mar 24, 2025

This is a quick fix, but I think we should make this kind of fix work out-of-the-box on the LM side, for example by adding it as kwargs. Most LMs accept loss_kwargs, so we can make all multimodal models also accept kwargs that are simply passed on to the LM. WDYT?

Yeah, it is true that most LMs accept such kwargs, but multimodal models compute the loss themselves, unless we reuse the loss computed by the language model part.
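A toy sketch of the two loss paths being contrasted here (illustrative shapes and names only):

```python
import torch
from torch.nn import functional as F

vocab = 16
logits = torch.randn(2, 5, vocab)        # (batch, seq, vocab)
labels = torch.randint(0, vocab, (2, 5))

# Path 1, the current pattern: the multimodal wrapper recomputes the loss
# itself with a plain mean over this micro-batch, so a Trainer-supplied
# num_items_in_batch is silently ignored.
shift_logits = logits[:, :-1].reshape(-1, vocab)
shift_labels = labels[:, 1:].reshape(-1)
vlm_loss = F.cross_entropy(shift_logits, shift_labels)

# Path 2, the alternative mentioned above: pass labels to the backbone and
# reuse its loss. Since #34511, backbones that receive num_items_in_batch
# normalize by the global token count, which stays correct under gradient
# accumulation (pseudo-code, assuming the usual output conventions):
#   outputs = self.language_model(..., labels=labels, **lm_kwargs)
#   loss = outputs.loss
```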

@hiyouga
Contributor

hiyouga commented Mar 24, 2025

A similar bug report regarding the gemma3 model: hiyouga/LlamaFactory#7416

@zucchini-nlp
Member Author

Yeah, it is true that most LMs accept such kwargs, but multimodal models compute the loss themselves, unless we reuse the loss computed by the language model part.

Yep, I will look into that and find a better solution

@zucchini-nlp
Member Author

@hiyouga sorry, I just got to look at the issue in detail. Indeed, this causes problems when accumulating grads. I think the best solution would be to make all VLMs (the ones touched by this PR first, then the others as well) compute the loss with self.loss_function

Would you like to submit a PR for that?
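For reference, a minimal sketch of what that fix could look like (a hypothetical toy module with illustrative names; loss_function below mirrors the ForCausalLMLoss contract rather than importing the real one):

```python
import torch
from torch import nn
from torch.nn import functional as F


class ToyVLMHead(nn.Module):
    """Hypothetical head showing the shape of the suggested fix."""

    def __init__(self, hidden: int = 8, vocab: int = 16):
        super().__init__()
        self.lm_head = nn.Linear(hidden, vocab)
        self.vocab = vocab

    def loss_function(self, logits, labels, vocab_size, num_items_in_batch=None, **kwargs):
        # Shift, then either mean-reduce (no Trainer hint) or sum-reduce and
        # divide by the global token count supplied under grad accumulation.
        shift_logits = logits[..., :-1, :].reshape(-1, vocab_size)
        shift_labels = labels[..., 1:].reshape(-1)
        if num_items_in_batch is None:
            return F.cross_entropy(shift_logits, shift_labels, ignore_index=-100)
        summed = F.cross_entropy(shift_logits, shift_labels, ignore_index=-100, reduction="sum")
        return summed / num_items_in_batch

    def forward(self, hidden_states, labels=None, **lm_kwargs):
        logits = self.lm_head(hidden_states)
        loss = None
        if labels is not None:
            # Forward the kwargs instead of swallowing them, so
            # num_items_in_batch actually reaches the loss.
            loss = self.loss_function(logits, labels, self.vocab, **lm_kwargs)
        return loss, logits
```

With this shape, splitting a batch into micro-batches while passing the full-batch token count sums to the same loss as the unsplit batch, which is exactly what gradient accumulation needs.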

@hiyouga
Contributor

hiyouga commented Apr 21, 2025

@zucchini-nlp Hi, I'm currently deeply occupied with writing my thesis and really don't have much spare time at the moment. Unfortunately, I won't be able to submit the PR. Thanks for understanding!

@zucchini-nlp
Member Author

@hiyouga yeah, no problem :)


Labels

for patch: Tag issues / labels that should be included in the next patch

Development

Successfully merging this pull request may close these issues.

PaliGemma2 doesn't work with transformers v4.48.2
