
Conversation

@pcuenca (Member) commented Jul 3, 2023

No description provided.

self.num_inference_steps = num_inference_steps

-        # "leading" and "trailing" corresponds to annotation of Table 1. of https://arxiv.org/abs/2305.08891
+        # "leading" and "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
@pcuenca (Member Author) Jul 3, 2023


missing linspace from ddim, will add later

dynamic_thresholding_ratio: float = 0.995,
clip_sample_range: float = 1.0,
sample_max_value: float = 1.0,
timestep_spacing: str = "leading",
@pcuenca (Member Author)


default is leading here

if self.config.timestep_spacing == "linspace":
    timesteps = (
        np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps)
        .round()[::-1]
        .copy()
        .astype(np.int64)
    )
@pcuenca (Member Author) Jul 3, 2023


Need to be careful here as some schedulers use float and others use ints.
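To illustrate the dtype difference (a standalone NumPy sketch, not any scheduler's actual code):

```python
import numpy as np

# Integer-indexed schedulers (e.g. DDIM/PNDM) cast timesteps to int64,
# while sigma-interpolating schedulers keep them as float64.
int_ts = np.linspace(0, 999, 10).round()[::-1].copy().astype(np.int64)
float_ts = np.linspace(0, 999, 10)[::-1].copy()  # no cast: stays float64
```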

Member


How about performing the type-casting once, after all the condition branches have run?
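Following that suggestion, a minimal sketch of a hypothetical helper (names like `num_train_timesteps` and `steps_offset` mirror the scheduler config; the `steps_offset` handling is simplified) that keeps every branch uncast and performs a single cast at the end:

```python
import numpy as np

def get_timesteps(spacing, num_train_timesteps, num_inference_steps, steps_offset=0):
    """Hypothetical helper: compute timesteps per spacing strategy, cast once at the end."""
    if spacing == "linspace":
        timesteps = np.linspace(0, num_train_timesteps - 1, num_inference_steps).round()[::-1].copy()
    elif spacing == "leading":
        step_ratio = num_train_timesteps // num_inference_steps
        timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy()
        timesteps += steps_offset
    elif spacing == "trailing":
        step_ratio = num_train_timesteps / num_inference_steps
        timesteps = np.round(np.arange(num_train_timesteps, 0, -step_ratio)) - 1
    else:
        raise ValueError(f"unsupported timestep_spacing: {spacing}")
    # Single cast after all branches, per the review suggestion.
    return timesteps.astype(np.int64)
```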

@pcuenca (Member Author)


Good idea, I'll check it out.

trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
use_karras_sigmas: Optional[bool] = False,
prediction_type: str = "epsilon",
timestep_spacing: str = "linspace",
@pcuenca (Member Author)


This one defaults to linspace

@pcuenca pcuenca changed the title Add timestep_spacing to DDPM, LMSDiscrete, PNDM. Add timestep_spacing to schedulers Jul 3, 2023
@pcuenca pcuenca marked this pull request as draft July 3, 2023 13:59
@HuggingFaceDocBuilderDev commented Jul 3, 2023

The documentation is not available anymore as the PR was closed or merged.

@property
def init_noise_sigma(self):
# standard deviation of the initial noise distribution
if self.config.timestep_spacing == "linspace":
@pcuenca (Member Author)


I think this should be the case for trailing too.
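As a hedged sketch of what that would look like (assuming a `sigmas` array whose `max()` is the first sigma used; with "linspace" and "trailing" the schedule starts at the largest sigma, while "leading" would keep the historical formula):

```python
import numpy as np

def init_noise_sigma(sigmas, timestep_spacing):
    # Standard deviation of the initial noise distribution.
    if timestep_spacing in ("linspace", "trailing"):
        return float(sigmas.max())
    # Historical formula for "leading".
    return float((sigmas.max() ** 2 + 1) ** 0.5)
```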

Comment on lines +88 to +90
timestep_spacing (`str`, default `"leading"`):
The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample
Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information.
@pcuenca (Member Author)


Note: steps_offset already existed here, I put timestep_spacing before it for consistency.

self._timesteps += self.config.steps_offset
# "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
if self.config.timestep_spacing == "linspace":
    self._timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps).round().astype(np.int64)
@pcuenca (Member Author)


Note they are not reversed in this case
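A quick illustration of the non-reversed order (a standalone NumPy sketch, not PNDM's actual code; presumably the reversal happens later, when the prk/plms step lists are assembled):

```python
import numpy as np

# The linspace timesteps stay in ascending order here.
ts = np.linspace(0, 999, 10).round().astype(np.int64)
```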

@pcuenca (Member Author) commented Jul 3, 2023

One problem is that PNDMScheduler defaults to leading, but others default to linspace. So, for example, when changing a scheduler after loading the pipeline, this would behave differently than before:

import torch
from diffusers import DiffusionPipeline
from diffusers import HeunDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config)
# Here we are using heun with `leading` instead of the previous default `linspace`

@patrickvonplaten how do you think we should deal with this?

Not sure the range is the way it was intended.
@patrickvonplaten (Contributor)

> One problem is that PNDMScheduler defaults to leading, but others default to linspace. […] @patrickvonplaten how do you think we should deal with this?

I thought about this too. Could we do the following: add a mechanism for values that are not loaded from the config (e.g. https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/scheduler/scheduler_config.json), default them to the values in the __init__ function, store the names of those defaulted parameters as a list in the hidden config dict under "_use_default_values": List[Any], and then pop those values from the config when loading it again?
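A rough, hypothetical sketch of that idea (the helper names and the `DemoScheduler` class are invented for illustration; only the "_use_default_values" key mirrors the proposal):

```python
import inspect

def build_config(cls, loaded_config):
    """Fill missing keys from __init__ defaults and record which ones were defaulted."""
    params = inspect.signature(cls.__init__).parameters
    defaults = {
        name: p.default
        for name, p in params.items()
        if name != "self" and p.default is not inspect.Parameter.empty
    }
    config = dict(loaded_config)
    used_defaults = [k for k in defaults if k not in config]
    for k in used_defaults:
        config[k] = defaults[k]
    config["_use_default_values"] = used_defaults
    return config

def reload_config(config):
    """Pop defaulted values so the next scheduler class applies its own defaults."""
    config = dict(config)
    for k in config.pop("_use_default_values", []):
        config.pop(k, None)
    return config

class DemoScheduler:
    def __init__(self, timestep_spacing="linspace", beta_start=0.0001):
        self.timestep_spacing = timestep_spacing
        self.beta_start = beta_start
```

When a different scheduler class reloads the config, the popped keys fall back to that class's own defaults instead of the previous class's.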

pcuenca added 4 commits July 3, 2023 20:06
For params not present in the original configuration.

This makes it possible to switch pipeline schedulers even if they use
different timestep_spacing (or any other param).
pcuenca added 2 commits July 4, 2023 13:25
This test currently fails in main. When switching from DEIS to UniPC, solver_type is "logrho" (the default value from DEIS), which gets translated to "bh1" by UniPC. This is different from UniPC's own default: "bh2". This is where the translation happens: https://github.com/huggingface/diffusers/blob/36d22d0709dc19776e3016fb3392d0f5578b0ab2/src/diffusers/schedulers/scheduling_unipc_multistep.py#L171
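In essence, the linked translation does something like this (a simplified, standalone sketch of the behavior, not the actual scheduler code):

```python
def translate_solver_type(solver_type):
    # UniPC only supports "bh1"/"bh2"; solver types inherited from other
    # schedulers (e.g. DEIS's "logrho") are mapped to "bh1", which differs
    # from UniPC's own default of "bh2".
    if solver_type in ("bh1", "bh2"):
        return solver_type
    if solver_type in ("midpoint", "heun", "logrho"):
        return "bh1"
    raise NotImplementedError(f"{solver_type} is not implemented")
```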
@pcuenca pcuenca marked this pull request as ready for review July 4, 2023 12:59
@pcuenca (Member Author) commented Jul 4, 2023

This is ready for review now. I'm not fully sure that the leading and trailing implementations for DEIS, UniPC, DPMSolverSDE, DPMSolverMultiStep, DPMSolverSingleStep are as intended.

Fixes a bug when switching to UniPC from another scheduler (e.g., DEIS) that uses a different solver type. The solver is now the same as if we had instantiated the scheduler directly.
@patrickvonplaten (Contributor)

PR looks great to me. However, could we merge this into "main" rather than the sd_xl branch, so that it can go into this week's release?

@pcuenca (Member Author) commented Jul 4, 2023

Rebased in #3947.
