
Bump transformers from 4.41.2 to 4.50.0 in /onnxruntime/python/tools/transformers/models/stable_diffusion/requirements#24591

Merged
snnn merged 1 commit into main from
dependabot/pip/onnxruntime/python/tools/transformers/models/stable_diffusion/requirements/transformers-4.50.0
Apr 29, 2025

Conversation

@dependabot

@dependabot dependabot bot commented on behalf of github Apr 29, 2025

Bumps transformers from 4.41.2 to 4.50.0.

Release notes

Sourced from transformers's releases.

Release v4.50.0

New Model Additions

Model-based releases

Starting with version v4.49.0, we have been doing model-based releases in addition to our traditional, software-based monthly releases. These model-based releases provide a tag from which models may be installed.

Unlike our software releases, these are not pushed to PyPI but are kept on our GitHub. Each release has a tag attributed to it, such as:

  • v4.49.0-Gemma-3
  • v4.49.0-AyaVision

⚠️ As bugs are identified and fixed on each model, the release tags are updated so that installing from that tag always gives the best experience possible with that model.
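As a concrete illustration, such a tag could be pinned with pip's VCS-pin syntax in a requirements file (the tag name is taken from the list above; this exact line does not appear in the release notes):

```
# requirements.txt — pin transformers to a model-release tag (illustrative)
transformers @ git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3
```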

Each new model release will always be based on the current state of the main branch at the time of its creation. This ensures that new models start with the latest features and fixes available.

For example, if two models—Gemma-3 and AyaVision—are released from main, and then a fix for gemma3 is merged, it will look something like this:

              o---- v4.49.0-Gemma-3 (includes AyaVision, plus main fixes)
            /                  \  
---o--o--o--o--o-- (fix for gemma3) --o--o--o main
       \          
        o---- v4.49.0-AyaVision

We strive to merge model-specific fixes on their respective branches as fast as possible!

Gemma 3


Gemma 3 is covered in detail in the corresponding model-based release, which we recommend reading if you want all the information relative to that model.

The Gemma 3 model was proposed by Google. It is a vision-language model composed of a SigLIP vision encoder and a Gemma 2 language decoder linked by a multimodal linear projection.

It cuts an image into a fixed number of tokens, the same way as SigLIP, if the image does not exceed a certain aspect ratio. For images that exceed that aspect ratio, it crops the image into multiple smaller patches and concatenates them with the base image embedding.

One particularity is that the model uses bidirectional attention on all the image tokens. The model also interleaves sliding-window local attention with full causal attention in the language backbone, where every sixth layer is a full causal attention layer.
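The attention pattern described above can be sketched as a mask-building function. This is an illustrative sketch, not transformers' actual implementation; the function name, the `(start, end)` image-span layout, and the window/pattern defaults are assumptions made here for clarity:

```python
import numpy as np

def attention_mask(seq_len, image_span, layer_idx, window=4, pattern=6):
    """Return a boolean mask where mask[q, k] is True if query q may attend to key k.

    image_span: (start, end) half-open range of image-token positions
    (an assumed layout for this sketch).
    """
    q = np.arange(seq_len)[:, None]
    k = np.arange(seq_len)[None, :]
    causal = k <= q
    if (layer_idx + 1) % pattern == 0:
        # Every sixth layer: full causal attention.
        mask = causal
    else:
        # Other layers: sliding-window local attention.
        mask = causal & (q - k < window)
    # Image tokens attend bidirectionally among themselves.
    s, e = image_span
    image_block = (q >= s) & (q < e) & (k >= s) & (k < e)
    return mask | image_block

m = attention_mask(8, image_span=(2, 5), layer_idx=0, window=2)
print(m[3, 4])  # True: image token 3 may attend ahead to image token 4
print(m[6, 0])  # False: text token 6 is outside the sliding window of token 0
```

Note how only the layer index changes the text-token pattern, while the bidirectional image block is the same at every layer.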

Shield Gemma2

ShieldGemma 2, built on Gemma 3, is a 4 billion (4B) parameter model that checks the safety of both synthetic and natural images against key categories to help you build robust datasets and models. With this addition to the Gemma family of models, researchers and developers can now easily minimize the risk of harmful content in their models across the key areas of harm defined below:

  • No Sexually Explicit content: The image shall not contain content that depicts explicit or graphic sexual acts (e.g., pornography, erotic nudity, depictions of rape or sexual assault).
  • No Dangerous Content: The image shall not contain content that facilitates or encourages activities that could cause real-world harm (e.g., building firearms and explosive devices, promotion of terrorism, instructions for suicide).
  • No Violence/Gore content: The image shall not contain content that depicts shocking, sensational, or gratuitous violence (e.g., excessive blood and gore, gratuitous violence against animals, extreme injury or moment of death).

We recommend using ShieldGemma 2 as an input filter to vision language models, or as an output filter of image generation systems. To train a robust image safety model, we curated training datasets of natural and synthetic images and instruction-tuned Gemma 3 to demonstrate strong performance.
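The input-filter pattern recommended above can be sketched as a simple gate. The `score_image` classifier here is a stand-in stub, not ShieldGemma 2's real API; a real deployment would call the model and map its output to the three harm categories listed:

```python
CATEGORIES = ("sexually_explicit", "dangerous_content", "violence_gore")

def score_image(image):
    # Stub: a real deployment would invoke the ShieldGemma 2 model here
    # and return a per-category probability of harm.
    return {c: 0.0 for c in CATEGORIES}

def passes_safety_filter(image, threshold=0.5):
    """Gate an image before it reaches a vision-language model."""
    scores = score_image(image)
    return all(scores[c] < threshold for c in CATEGORIES)

print(passes_safety_filter(object()))  # stub scores are all 0.0 → True
```

The same gate works as an output filter for an image-generation system by running it on each generated image before returning it to the user.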

... (truncated)

Commits

  • 0b057e6 fix import issue
  • 26fbd69 v 4.50.0
  • 523f6e7 Fix: dtype cannot be str (#36262)
  • 3f9ff19 Minor Gemma 3 fixes (#36884)
  • f94b0c5 Use deformable_detr kernel from the Hub (#36853)
  • 2638d54 Gemma 3 tests expect greedy decoding (#36882)
  • b8aadc3 supersede paligemma forward to shift p...
  • 6321876 add eustlb as an actor
  • 94f4876 [generate] model defaults being inherited only happens for newer models (#36881)
  • f19d018 Revert "Update deprecated Jax calls (#35919)" (#36880)
  • Additional commits viewable in the compare view (v4.41.2...v4.50.0)

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the Security Alerts page.

Bumps [transformers](https://github.com/huggingface/transformers) from 4.41.2 to 4.50.0.
- [Release notes](https://github.com/huggingface/transformers/releases)
- [Commits](huggingface/transformers@v4.41.2...v4.50.0)

---
updated-dependencies:
- dependency-name: transformers
  dependency-version: 4.50.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
@dependabot dependabot bot added dependencies Pull requests that update a dependency file python Pull requests that update Python code labels Apr 29, 2025
@snnn

snnn commented Apr 29, 2025

/azp run Linux QNN CI Pipeline, Win_TRT_Minimal_CUDA_Test_CI, Windows ARM64 QNN CI Pipeline, Windows GPU Doc Gen CI Pipeline, Windows x64 QNN CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 5 pipeline(s).

@snnn snnn merged commit e685f50 into main Apr 29, 2025
70 checks passed
@snnn snnn deleted the dependabot/pip/onnxruntime/python/tools/transformers/models/stable_diffusion/requirements/transformers-4.50.0 branch April 29, 2025 23:05
ankitm3k pushed a commit to intel/onnxruntime that referenced this pull request May 12, 2025
…transformers/models/stable_diffusion/requirements (microsoft#24591)

Bumps [transformers](https://github.com/huggingface/transformers) from
4.41.2 to 4.50.0.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
