
Update min_lr and max_lr default values to better defaults #45168

Open

w601sxs wants to merge 1 commit into huggingface:main from w601sxs:patch-1

Conversation

@w601sxs commented Apr 1, 2026

Based on our experimentation, the min and max LR for LLMs need to be set to sensible defaults. Please refer to the paper. For the broader community, 1e-7 to 1e-4 are decent defaults.

What does this PR do?

Fixes # (issue): the default max LR is very high, and the default min LR is not low enough.
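As a rough illustration of the proposed bounds (a minimal sketch, not the actual diff in this PR; the scheduler classes and argument names inside transformers may differ), here is the range applied to a plain PyTorch cosine schedule:

```python
import torch

MIN_LR = 1e-7  # proposed default lower bound
MAX_LR = 1e-4  # proposed default upper bound (rather than 1.0)

# Placeholder model and optimizer, purely for illustration.
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=MAX_LR)

# Cosine annealing from MAX_LR down to MIN_LR; eta_min is the decay floor.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=1_000, eta_min=MIN_LR
)

for step in range(10):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 16)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()
```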

Code Agent Policy

The Transformers repo is currently being overwhelmed by a large number of PRs and issue comments written by
code agents. We are currently bottlenecked by our ability to review and respond to them. As a result,
we ask that new users do not submit pure code agent PRs at this time.
You may use code agents in drafting or to help you diagnose issues. We'd also ask autonomous "OpenClaw"-like agents
not to open any PRs or issues for the moment.

PRs that appear to be fully agent-written will probably be closed without review, and we may block users who do this
repeatedly or maliciously.

This is a rapidly-evolving situation that's causing significant shockwaves in the open-source community. As a result,
this policy is likely to be updated regularly in the near future. For more information, please read CONTRIBUTING.md.

  • I confirm that this is not a pure code agent PR.

Before submitting

  • [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [x] Did you read the contributor guideline, Pull Request section?
  • [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [x] Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Rocketknight1 (Member) commented

Agree that 1.0 seems like a very large LR! 1e-4 seems low for a max, though; can you give us the paper reference for GreedyLR's preferred range?

@w601sxs (Author) commented Apr 2, 2026

> Agree that 1.0 seems like a very large LR! 1e-4 seems low for a max, though; can you give us the paper reference for GreedyLR's preferred range?

Thanks @Rocketknight1 - I am the first author of the paper. It depends on the kind of experiment and other settings (https://arxiv.org/pdf/2512.14527#page=13.19). For smaller models and under certain conditions we can go higher. For the pretraining experiments in the paper we set the initial LR to 2e-4, with the min_LR bound set to 10% of the initial LR (2e-5). In general, across all types of models, we think the range from 1e-7 to 1e-4 gives good results by default. Also sharing some recent models and their LRs to justify the range (although they did not use GreedyLR; they used a fixed LR, a manually updated LR, or linear or cosine decay within that range):

1e-4 to 1e-7 should work generally, and users can update the min/max if required. Hope this helps!
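To make the 10%-of-initial-LR rule above concrete, a tiny worked sketch (illustrative only; the variable names are made up, and this is not code from the paper or this PR):

```python
# min_lr is set to 10% of the initial LR, per the pretraining setup above.
initial_lr = 2e-4          # the paper's pretraining setting
min_lr = 0.1 * initial_lr  # 10% of the initial LR -> 2e-5

# Community-wide defaults proposed in this PR:
default_min_lr, default_max_lr = 1e-7, 1e-4

print(f"paper pretraining: max_lr={initial_lr:.0e}, min_lr={min_lr:.0e}")
print(f"proposed defaults: [{default_min_lr:.0e}, {default_max_lr:.0e}]")
```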

@w601sxs (Author) commented Apr 6, 2026

Let us know what you think @Rocketknight1 ^

@Rocketknight1 (Member) left a comment

Trusting you on the values, in that case, and happy to accept this PR!

@Rocketknight1 (Member) commented

@w601sxs it seems like the style checkers are complaining; can you try `make fix-repo` and see if you can figure out what they're unhappy about? Once you get the CI green, just ping me and I'll merge it.
