upgrade lm_eval to 0.4.5 #6533
Merged
facebook-github-bot merged 2 commits into gh/helunwencser/64/base on Oct 29, 2024
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6533

Note: Links to docs will display an error until the doc builds have completed. ✅ No failures as of commit 999c02d with merge base 2c32bf3. This comment was automatically generated by Dr. CI and updates every 15 minutes.
helunwencser (Contributor, Author)

@helunwencser has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
malfet approved these changes on Oct 28, 2024

guangy10 reviewed on Oct 28, 2024
```diff
 ):
     device = "cuda" if torch.cuda.is_available() else "cpu"
-    super().__init__(device=device)
+    super().__init__(device=device, pretrained="gpt2")
```
Contributor
Okay, this hack will make the newer version happy by giving it a valid HF model repo, even though it won't be used for eval at all. Maybe add a comment explaining this?
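The reviewer's suggestion can be sketched without pulling in `lm_eval` itself. The base class below is a hypothetical stand-in for lm_eval's `HFLM`, not the actual executorch wrapper; it only illustrates why the placeholder argument (with an explanatory comment) is needed:

```python
class HFLM:
    """Illustrative stand-in for lm_eval's HFLM base class.

    In newer lm_eval releases the `pretrained` argument is required;
    in 0.4.2 it silently defaulted to "gpt2".
    """

    def __init__(self, device="cpu", pretrained=None):
        if pretrained is None:
            raise TypeError("`pretrained` is required in newer lm_eval")
        self.device = device
        self.pretrained = pretrained


class EagerEvalWrapper(HFLM):
    """Sketch of the PR's wrapper, with the comment the reviewer asked for."""

    def __init__(self):
        device = "cpu"  # the real code picks "cuda" when available
        # NOTE: `pretrained="gpt2"` only satisfies the newer lm_eval
        # constructor; this repo is never actually used for eval here.
        super().__init__(device=device, pretrained="gpt2")
```

The point of the explicit comment is that a future reader will not assume `gpt2` weights are ever loaded or evaluated by this wrapper.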
guangy10 approved these changes on Oct 28, 2024
We have been using a pretty old `lm_eval` version. This is blocking us from upgrading other libraries like `transformers` and blocking some other work, for example #6489.

In newer versions of `lm_eval`, `pretrained` becomes a required parameter. In 0.4.2, it defaults to `gpt2` if not provided. This PR upgrades our `lm_eval` version to the latest release, 0.4.5, and sets `pretrained` to its original default value, `gpt2`.

Test Plan: Run eval before and after this PR and make sure the perplexity number stays about the same.

<img width="682" alt="Screenshot 2024-10-28 at 12 22 45 PM" src="https://github.com/user-attachments/assets/f7bccc55-ad5a-4f90-8eae-eefdd8e9997a">

Differential Revision: [D65079913](https://our.internmc.facebook.com/intern/diff/D65079913)
Contributor

This pull request was exported from Phabricator. Differential Revision: D65079913
guangy10 approved these changes on Oct 29, 2024
helunwencser added a commit that referenced this pull request on Oct 30, 2024
Pull Request resolved: #6533. Differential Revision: [D65079913](https://our.internmc.facebook.com/intern/diff/D65079913/). ghstack-source-id: 250754584. Co-authored-by: Lunwen He <lwhecser@gmail.com>
Stack from ghstack (oldest at bottom):
Test Plan:

Run eval before and after this PR. Make sure the perplexity number stays around the same.
Differential Revision: D65079913
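If the wrapper ever needed to support both `lm_eval` API generations side by side, a small version gate could decide whether to pass the old implicit default explicitly. This helper is hypothetical and not part of the PR; the 0.4.2 cutoff reflects the description above:

```python
def hflm_kwargs(lm_eval_version: str, device: str) -> dict:
    """Build HFLM constructor kwargs for a given lm_eval version.

    lm_eval 0.4.2 defaulted `pretrained` to "gpt2"; per this PR,
    newer releases require it explicitly, so for those we pass the
    old default ourselves.
    """
    version = tuple(int(part) for part in lm_eval_version.split("."))
    kwargs = {"device": device}
    if version > (0, 4, 2):
        # Old implicit default; only satisfies the constructor, the
        # repo is not actually used for eval in this wrapper.
        kwargs["pretrained"] = "gpt2"
    return kwargs
```

Pinning to a single version (as the PR does) is simpler, but a gate like this keeps the workaround in one place if the pin ever has to move again.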