Embedding VLMs don't need a head #45000
Conversation
run_slow: colpali, colqwen2

This comment contains models: ["models/colpali", "models/colqwen2"]
CI Results / Commit Info
Model CI Report: ❌ 1 new failed tests from this PR 😭
run-slow: colmodernvbert, colpali, colqwen2

This comment contains models: ["models/colmodernvbert", "models/colpali", "models/colqwen2"]
CI Results / Commit Info
Model CI Report: ❌ 7 new failed tests from this PR 😭
run-slow: colqwen2

[For maintainers] Suggested jobs to run (before merge): run-slow: colmodernvbert, colpali, colqwen2

This comment contains models: ["models/colqwen2"]
* squash
* fix copies
* skip, we dont need to load base model for it
* oops, one more regex since now we have no prefix
What does this PR do?

As per title: after #44976, users would see a `missing_weights - lm_head not found` error even though the model doesn't use an lm head. On the way, also deleted unnecessary methods that were identical to the base class.
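The failure mode can be sketched without transformers itself: a minimal, hypothetical missing-weights check that reports `lm_head` as missing for an embedding model that never declares one, and the fix of excluding head keys for such models. All function and key names below are illustrative, not the actual transformers internals.

```python
# Hypothetical sketch of the missing-weights check described above.
# None of these names are real transformers internals.

def find_missing_weights(expected_keys, checkpoint_keys, ignore_prefixes=()):
    """Report expected keys absent from the checkpoint, skipping ignored prefixes."""
    missing = []
    for key in expected_keys:
        if key in checkpoint_keys:
            continue
        if any(key.startswith(p) for p in ignore_prefixes):
            continue  # e.g. heads an embedding-only model never uses
        missing.append(key)
    return missing

# A generation model expects an lm_head; an embedding VLM does not.
checkpoint = {"vlm.embed_tokens.weight", "embedding_proj_layer.weight"}
expected = ["vlm.embed_tokens.weight", "embedding_proj_layer.weight", "lm_head.weight"]

# Before the fix: lm_head.weight is reported missing even though the
# embedding model has no language-modeling head at all.
print(find_missing_weights(expected, checkpoint))                 # ['lm_head.weight']

# After the fix: head keys are excluded for embedding-only models.
print(find_missing_weights(expected, checkpoint, ("lm_head",)))   # []
```

The second call mirrors what this PR achieves for ColPali/ColQwen2-style models: the head key no longer counts as a missing weight, so no spurious warning is emitted at load time.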