Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6489
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV — there is 1 currently active SEV. If your PR is affected, please view it below.
❌ 1 New Failure — as of commit 51f64f3 with merge base d7826c8, the following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 091df31 to 8340c60.
@guangy10 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Force-pushed from 8340c60 to 3161181.
@guangy10 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Ok, the circular import issue is still valid. I will resolve it in a separate PR.
Force-pushed from db82e06 to 13aafa3.
Pull Request resolved: #6533

We have been using a pretty old `lm_eval` version. This is blocking us from upgrading other libraries like `transformers` and is blocking other work, for example #6489. In newer versions of `lm_eval`, `pretrainedModel` becomes a required parameter; in 0.4.2 it defaults to `gpt2` if not provided. This PR upgrades our `lm_eval` version to the latest release, 0.4.5, and sets `pretrainedModel` to its original default value, `gpt2`.

Differential Revision: [D65079913](https://our.internmc.facebook.com/intern/diff/D65079913/)
ghstack-source-id: 250754584
Co-authored-by: Lunwen He <lwhecser@gmail.com>
Force-pushed from 13aafa3 to 69b00ed.
Force-pushed from 69b00ed to 51f64f3.
@guangy10 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Seems to be a false alarm. Running
We have been using a pretty old `lm_eval` version. This is blocking us from upgrading other libraries like `transformers` and is blocking other work, for example pytorch/executorch#6489. In newer versions of `lm_eval`, `pretrainedModel` becomes a required parameter; in 0.4.2 it defaults to `gpt2` if not provided. This PR upgrades our `lm_eval` version to the latest release, 0.4.5, and sets `pretrainedModel` to its original default value, `gpt2`.

Test Plan: Run eval before and after this PR and make sure the perplexity number stays around the same.

<img width="682" alt="Screenshot 2024-10-28 at 12 22 45 PM" src="https://github.com/user-attachments/assets/f7bccc55-ad5a-4f90-8eae-eefdd8e9997a">

Differential Revision: [D65079913](https://our.internmc.facebook.com/intern/diff/D65079913)
[ghstack-poisoned]
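To illustrate the API change being worked around here, a minimal sketch in plain Python (the function names `evaluate_old` and `evaluate_new` are hypothetical, not `lm_eval`'s actual API — they only model a keyword argument that had a default in 0.4.2 and became required later):

```python
# Hypothetical illustration of the lm_eval behavior change described above.

def evaluate_old(pretrained="gpt2"):
    # 0.4.2-style: the model falls back to "gpt2" when not provided.
    return pretrained

def evaluate_new(pretrained):
    # Newer-style: the caller must pass the model explicitly,
    # so call sites relying on the implicit default break.
    return pretrained

print(evaluate_old())         # old code works without an argument -> gpt2
print(evaluate_new("gpt2"))   # new code must pass "gpt2" explicitly -> gpt2
```

This is why the PR both bumps the version and pins the value explicitly: passing `gpt2` at the call site preserves the old behavior under the new required-parameter signature.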
Bump `transformers` version to v4.46.0, where more models are compatible with ExecuTorch out-of-the-box. Consolidate to a single version of `transformers` everywhere in the codebase, e.g. `examples/`, `.workflow/`. However, we hardcode `lm_eval` to a very old version, `0.4.2`, which is not compatible with `transformers >= 4.45` (actually incompatible with `tokenizers >= 0.20`). It must be upgraded, but not in this PR. Since the eval uses tinyllama in the CI, the version of `transformers` doesn't matter there. The workaround is to force-reinstall a lower version in the CI.
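The CI workaround described above might look like the following pip invocation (a sketch — the version pins come from the constraints stated above, but the exact CI step is an assumption about the workflow setup):

```shell
# Hypothetical CI step: lm_eval 0.4.2 is incompatible with tokenizers >= 0.20,
# so force-reinstall versions below the breaking releases before running eval.
pip install --force-reinstall "transformers<4.45" "tokenizers<0.20"
```

`--force-reinstall` ensures the already-installed newer versions are replaced rather than left in place, which is why a plain `pip install` would not suffice here.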