@a-r-r-o-w
Contributor

Fixes #12108

import torch
from diffusers import QwenImagePipeline, UniPCMultistepScheduler

pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, prediction_type="flow_prediction", use_flow_sigmas=True)
pipe.to("cuda")
image = pipe("a cat holding a sign that says 'hello world'", num_inference_steps=30).images[0]
image.save("output.png")
[output image]

The quality is not great; not sure why yet. Still needs debugging.
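For context on what `prediction_type="flow_prediction"` implies here (a minimal sketch of the algebra, not the scheduler's actual code): the model output is treated as a flow-matching velocity, and the data prediction a multistep solver like UniPC needs can be recovered from it. Under the interpolation x_t = (1 - sigma) * x0 + sigma * noise, the velocity target is v = noise - x0:

```python
import numpy as np


def flow_velocity_to_x0(sample: np.ndarray, velocity: np.ndarray, sigma: float) -> np.ndarray:
    """Under x_t = (1 - sigma) * x0 + sigma * noise with velocity v = noise - x0,
    the data prediction is x0 = x_t - sigma * v."""
    return sample - sigma * velocity


# Sanity-check the algebra on random data.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((2, 3))
noise = rng.standard_normal((2, 3))
sigma = 0.7
x_t = (1 - sigma) * x0 + sigma * noise
v = noise - x0
assert np.allclose(flow_velocity_to_x0(x_t, v, sigma), x0)
```

This is only the conversion step; the quality issue being debugged could sit anywhere in the multistep update, not necessarily here.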

cc @Vargol

@a-r-r-o-w a-r-r-o-w requested a review from yiyixuxu August 9, 2025 23:43
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@a-r-r-o-w
Contributor Author

I think it's working:

import torch
from diffusers import QwenImagePipeline, UniPCMultistepScheduler

pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, prediction_type="flow_prediction", use_flow_sigmas=True)
pipe.to("cuda")

prompt = """A coffee shop entrance features a chalkboard sign reading "Qwen Coffee 😊 $2 per cup," with a neon light beside it displaying "通义千问". Next to it hangs a poster showing a beautiful Chinese woman, and beneath the poster is written "π≈3.1415926-53589793-23846264-33832795-02384197". Ultra HD, 4K, cinematic composition."""
image = pipe(prompt, negative_prompt=" ", width=1664, height=928, num_inference_steps=30, generator=torch.Generator().manual_seed(42)).images[0]
image.save("output.png")
[Comparison images: FM Euler (50 steps) vs. UniPC (30 steps)]

cc @asomoza in case you want to do some tests

@a-r-r-o-w
Contributor Author

Gentle ping @yiyixuxu

@staturecrane

Pinging to say that having this support would be so helpful!

@github-actions
Contributor

github-actions bot commented Jan 9, 2026

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@github-actions github-actions bot added the stale Issues that haven't received updates label Jan 9, 2026
@yiyixuxu yiyixuxu removed the stale Issues that haven't received updates label Jan 9, 2026
@yiyixuxu yiyixuxu requested a review from dg845 January 9, 2026 22:42
@yiyixuxu
Collaborator

yiyixuxu commented Jan 9, 2026

@dg845 can you take a look here?

@dg845
Collaborator

dg845 commented Jan 16, 2026

Hi @yiyixuxu, could you take a look at this PR? I think it is close to being merged.

@yiyixuxu
Collaborator

thanks for working on this!
can you share an output comparison like #12109 (comment)?

@dg845
Collaborator

dg845 commented Jan 17, 2026

Script:

import torch
from diffusers import QwenImagePipeline, UniPCMultistepScheduler

device = "cuda:0"
pipe = QwenImagePipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to(device)

prompt = """A coffee shop entrance features a chalkboard sign reading "Qwen Coffee 😊 $2 per cup," with a neon light beside it displaying "通义千问". Next to it hangs a poster showing a beautiful Chinese woman, and beneath the poster is written "π≈3.1415926-53589793-23846264-33832795-02384197". Ultra HD, 4K, cinematic composition."""
negative_prompt = " "
height = 928
width = 1664
seed = 42

image_euler = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_inference_steps=50,
    generator=torch.Generator().manual_seed(seed),
).images[0]

image_euler.save("qwen_euler_50_steps.png")

pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, prediction_type="flow_prediction", use_flow_sigmas=True
)

image_unipc = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=height,
    width=width,
    num_inference_steps=30,
    generator=torch.Generator().manual_seed(seed),
).images[0]

image_unipc.save("qwen_unipc_flow_30_steps.png")

Euler Flow Matching, 50 steps:

[qwen_euler_50_steps.png]

UniPC Multistep Flow, 30 steps:

[qwen_unipc_flow_30_steps.png]

The samples were generated on an A100.

@yiyixuxu yiyixuxu left a comment (Collaborator)

thanks!
I left one question: I think if we don't support custom timesteps, the logic would look much better/simpler,
but let me know what you think.

device: Union[str, torch.device] = None,
mu: Optional[float] = None,
sigmas: Optional[List[float]] = None,
timesteps: Optional[List[float]] = None,
Collaborator

I think maybe we don't need to support custom timesteps here.
It was introduced in some pipelines for AYS and is a much less common use case.
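For illustration, a hypothetical sketch (function name, defaults, and shift formula are assumptions for this sketch, not the PR's actual code) of how a simplified `set_timesteps` could look if the custom `timesteps` branch were dropped, keeping only `sigmas` and the `mu` dynamic-shift path:

```python
import math
from typing import List, Optional, Tuple

import numpy as np


def set_timesteps_sketch(
    num_inference_steps: int,
    num_train_timesteps: int = 1000,
    mu: Optional[float] = None,
    sigmas: Optional[List[float]] = None,
) -> Tuple[np.ndarray, np.ndarray]:
    """Hypothetical simplified schedule setup with no custom-`timesteps` branch."""
    if sigmas is None:
        # Default flow-matching sigmas: linear from 1 down to 1/num_inference_steps.
        sigmas = np.linspace(1.0, 1.0 / num_inference_steps, num_inference_steps)
    else:
        sigmas = np.asarray(sigmas, dtype=np.float64)
    if mu is not None:
        # Resolution-dependent time shift of the kind flow-matching pipelines use.
        sigmas = math.exp(mu) / (math.exp(mu) + (1.0 / sigmas - 1.0))
    timesteps = sigmas * num_train_timesteps
    return sigmas, timesteps
```

With this shape, the only branching is on `sigmas` and `mu`; the AYS-style custom-`timesteps` path the review questions would simply not exist.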



Successfully merging this pull request may close these issues.

Qwen Image and Chroma pipeline breaks using schedulers that enable flow matching by parameter.

6 participants