Describe the bug
I am using a simple implementation on a T4 GPU (Google Cloud):
import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

access_token = ""

# Load the fp16 weights of Stable Diffusion v1.4 and move the pipeline to the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=access_token,
)
pipe = pipe.to("cuda")

# Generate one image from the text prompt and save it to disk.
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
image.save("astronaut_rides_horse_1.png")
Generation takes about 10-15 seconds per image, which is a good result.
However, there is a problem with the time before generation starts: pre-loading the models and files can take ~30 seconds.
How can we speed up this preloading?
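One thing that may help (a sketch under assumptions, not an official recommendation): if most of the ~30 seconds is spent downloading or re-resolving files against the Hugging Face Hub, the weights can be fetched once, saved to a local directory, and loaded from that path on later runs. The local_dir path below is just an illustrative choice.

import torch
from diffusers import StableDiffusionPipeline

access_token = ""
local_dir = "./stable-diffusion-v1-4-fp16"  # hypothetical local cache location

# One-time setup: download the fp16 weights and save them to local disk.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=access_token,
)
pipe.save_pretrained(local_dir)

# Later runs: load directly from the local copy instead of resolving the Hub repo.
pipe = StableDiffusionPipeline.from_pretrained(
    local_dir,
    torch_dtype=torch.float16,
).to("cuda")

Even with local weights, moving the fp16 parameters onto the GPU still takes a few seconds, so for a service it usually makes sense to load the pipeline once at startup and reuse it across requests rather than re-creating it per image.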
Reproduction
No response
Logs
No response
System Info
Google Cloud - T4