Disable autocast for cpu to fix error. Remove unused precision arg. #518
Conversation
tildebyte: This is tangential to this PR, but does this and other work in this repo mean that generation works on Intel CPUs (obviously very slowly), or is more work needed?
Yes, this PR makes generation work on (Intel) CPUs. It takes about 5 minutes for -s30 -A ddim on a few-year-old i7 laptop.
When running on CPU only (Intel), a call to torch.layer_norm would error with "RuntimeError: expected scalar type BFloat16 but found Float". This PR fixes buggy device handling in model.py. Tested with scripts/dream.py --full_precision on CPU only on an Intel laptop; it works, but is slow at ~10 s/it.
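The error arises because torch.autocast on CPU casts some ops to bfloat16 while others stay float32, and torch.layer_norm then sees mismatched dtypes. The fix is to only enter an autocast context on CUDA and fall back to an ordinary (full-precision) context on CPU. A minimal sketch of that pattern, not the PR's exact diff (function and variable names here are illustrative):

```python
from contextlib import nullcontext

import torch


def choose_precision_context(device_type: str):
    # On CUDA, autocast gives a useful mixed-precision speedup.
    # On CPU, autocast uses bfloat16 and can break ops such as
    # torch.layer_norm with dtype mismatches, so run full precision there.
    if device_type == "cuda":
        return torch.autocast(device_type="cuda")
    return nullcontext()


device_type = "cuda" if torch.cuda.is_available() else "cpu"
with choose_precision_context(device_type):
    x = torch.randn(2, 8)
    # Stays float32 on CPU, so layer_norm no longer raises a dtype error.
    y = torch.nn.functional.layer_norm(x, (8,))
```

The same effect can be had with torch.autocast(..., enabled=False), but returning a plain nullcontext on CPU keeps autocast out of the CPU path entirely.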
lstein
left a comment
Looks good to me. Tested and didn't find any problems on CUDA.
Only orig_scripts/ may have questionable uses of autocast left. I do have an idea for #526 that I'll comment on there.
autocast not supported on CPU pytorch/pytorch#55374 invoke-ai/InvokeAI#518