This repository was archived by the owner on Feb 7, 2025. It is now read-only.

Allow forcing full precision when computing attention #187

@Warvito

Description

As mentioned in the Stable Diffusion v2.1 README (https://github.com/Stability-AI/stablediffusion/blame/main/README.md#L15), computing attention in half precision can cause numerical instabilities. To avoid this, we could add an option to our models that forces the attention computation to run in full precision.
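A minimal sketch of what this could look like, assuming a PyTorch self-attention block; the `use_fp32_attention` flag and the class/parameter names here are hypothetical, not the library's actual API. The idea is to upcast the query/key product and the softmax to float32, mirroring the Stable Diffusion v2.1 workaround, and cast back before multiplying by the values:

```python
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Self-attention with an optional full-precision softmax (sketch only)."""

    def __init__(self, dim: int, num_heads: int = 8, use_fp32_attention: bool = False):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.use_fp32_attention = use_fp32_attention  # hypothetical flag
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        qkv = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (
            t.view(b, n, self.num_heads, self.head_dim).transpose(1, 2) for t in qkv
        )

        if self.use_fp32_attention:
            # Upcast before the dot product and softmax: fp16 attention
            # logits can overflow or lose precision, which is the
            # instability the Stable Diffusion v2.1 README mentions.
            q, k = q.float(), k.float()

        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Cast back to the value dtype so the rest of the network
        # continues in half precision.
        out = (attn.to(v.dtype) @ v).transpose(1, 2).reshape(b, n, -1)
        return self.to_out(out)
```

With this kind of flag, a model constructed with `SelfAttention(dim=64, use_fp32_attention=True)` would keep the attention softmax in float32 even when the surrounding network runs under autocast/fp16, at a modest memory and speed cost for the attention step only.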
