Open
Labels: enhancement (New feature or request)
Description
Is there an existing issue for this?
- I have searched the existing issues
Contact Details
No response
What should this feature add?
By default, most models run in VRAM, but VRAM is limited. I would like the option to run a model on the CPU instead. This could take the form of a toggle in the model manager's Default Settings tab that sets, on a per-model basis, an option to "Run model on CPU". There could also be a default value for this setting defined in invokeai/backend/model_manager/configs/base.py; a rough sketch of that idea is below.
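As a rough illustration only (the class and field names here are assumptions for the sake of the example, not the actual contents of invokeai/backend/model_manager/configs/base.py), the per-model default could be a simple boolean on the model's default-settings config, which the loader consults when choosing a device:

```python
from pydantic import BaseModel, Field


# Sketch only: the class and field names below are illustrative assumptions,
# not the real contents of invokeai/backend/model_manager/configs/base.py.
class ModelDefaultSettings(BaseModel):
    """Per-model defaults surfaced in the Model Manager's Default Settings tab."""

    # Option proposed by this issue: load and run the model on the CPU
    # instead of placing its weights in VRAM.
    run_on_cpu: bool = Field(
        default=False,
        description="Run this model on the CPU instead of the GPU.",
    )


def pick_device(settings: ModelDefaultSettings) -> str:
    """Choose an execution device based on the per-model setting."""
    return "cpu" if settings.run_on_cpu else "cuda"
```

Keeping the GPU path as the default means existing installs see no behaviour change; only models explicitly flagged in their Default Settings would be placed on the CPU.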
Alternatives
No response
Additional Content
No response