Fix: Cast to float before applying torch.finfo #1

Merged
dsnsabari merged 5 commits into main from new-fixes-patch2 on Jul 10, 2025

Conversation

@dsnsabari (Owner)

The CI is failing because the same torch.finfo() bug exists in multiple related models that share similar code. The fix needs to be applied consistently across:

qwen2_5_vl ✅ (already fixed)
qwen2_5_omni ❌ (needs same fix)
glm4v ❌ (needs same fix)

Action needed: Apply the same dtype-checking fix to all affected models to keep the code consistent across the codebase.
Files to update:

src/transformers/models/qwen2_5_omni/modeling_qwen2_5_omni.py
src/transformers/models/glm4v/modeling_glm4v.py

This is a standard requirement in the Transformers codebase to maintain consistency when multiple models share similar patterns.
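The underlying bug is that `torch.finfo()` only accepts floating-point dtypes and raises a `TypeError` for integer dtypes, so call sites must check the dtype (or cast to float) first. A minimal sketch of the pattern, using a hypothetical helper name (`safe_finfo_min`) rather than the exact code applied in this PR:

```python
import torch


def safe_finfo_min(dtype: torch.dtype) -> float:
    """Return the minimum representable value for `dtype`.

    torch.finfo() raises a TypeError for non-floating dtypes
    (e.g. torch.int64), so fall back to float32 in that case.
    Hypothetical helper illustrating the fix pattern.
    """
    if not dtype.is_floating_point:
        dtype = torch.float32
    return torch.finfo(dtype).min


# Works for float dtypes and no longer crashes for integer dtypes.
fp_min = safe_finfo_min(torch.float16)
int_min = safe_finfo_min(torch.int64)  # falls back to float32's min
```

The same guard (or an explicit cast of the tensor to a float dtype before the `torch.finfo` call) would then be repeated in each affected model file.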

@dsnsabari merged commit 0ef332d into main on Jul 10, 2025