
fix: torch_float should return float, not int #44697

Open

LincolnBurrows2017 wants to merge 2 commits into huggingface:main from LincolnBurrows2017:fix/torch_float

Conversation

@LincolnBurrows2017

Description

The torch_float function in src/transformers/utils/generic.py incorrectly returned int(x) in two places where it should return float(x):

  1. When torch is not available (the fallback case)
  2. When not in a tracing context with a torch tensor

This is inconsistent with the function's name and purpose (converting to float32), and it silently truncates fractional values. The companion function torch_int correctly returns int(x).

Fix

Changed int(x) to float(x) in both locations.
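The change can be sketched as below. This is a minimal illustration of the helper's structure as described in this PR, not the exact transformers source; is_torch_available is stubbed here so the fallback path is runnable without torch installed.

```python
# Sketch of the corrected torch_float fallback (assumed structure, mirroring
# the PR description; the real helper lives in src/transformers/utils/generic.py).

def is_torch_available():
    # Stub for illustration: pretend torch is not installed,
    # so the fallback branch below is exercised.
    return False

def torch_float(x):
    """Cast x to float32 when tracing a torch tensor, else to a Python float.

    Before the fix, both fallback branches returned int(x), silently
    truncating e.g. 2.5 to 2.
    """
    if not is_torch_available():
        return float(x)  # was: int(x)
    import torch
    if torch.jit.is_tracing() and isinstance(x, torch.Tensor):
        return x.to(torch.float32)
    return float(x)  # was: int(x)

print(torch_float(2.5))  # 2.5 after the fix; int(x) would have produced 2
```

With the old code, any fractional scale factor passed through this helper was rounded toward zero, which is exactly the kind of bug that stays invisible until a non-integer value matters.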


Bug originally identified during AI contribution task.

@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: doge

Contributor


Thanks! I'm not sure about the other changes in this PR, but the fix for int() where it should be float() is correct. Surprised it went unnoticed for so long.
