forked from huggingface/diffusers
Upstream #3
Merged
Conversation
* Apply same ruff settings as in transformers See https://github.com/huggingface/transformers/blob/main/pyproject.toml Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com> * Apply new style rules * Style Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com> * style * remove list, ruff wouldn't auto fix. --------- Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
* Helper function to disable custom attention processors. * Restore code deleted by mistake. * Format * Fix modeling_text_unet copy.
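A minimal sketch of how the new helper might be used, assuming the method name `set_default_attn_processor` that the UNet carries in diffusers; the checkpoint id is only an example.

```python
from diffusers import StableDiffusionPipeline

# Example checkpoint; any pipeline that exposes a UNet works the same way.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# After experimenting with custom attention processors
# (e.g. via pipe.unet.set_attn_processor(...)), restore the built-in defaults:
pipe.unet.set_default_attn_processor()
```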
…2804) * add: better warning messages when handling multiple conditioning. * fix: handling of controlnet_conditioning_scale
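A hedged sketch of the multi-ControlNet usage the improved warnings and the `controlnet_conditioning_scale` handling target; the model ids, prompt, and scale values are placeholders, and the conditioning images are blank stand-ins.

```python
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Placeholder conditioning images standing in for real Canny / pose maps.
canny_image = Image.new("RGB", (512, 512))
pose_image = Image.new("RGB", (512, 512))

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny"),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose"),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets
)

# With several ControlNets, controlnet_conditioning_scale may be a list with
# one entry per ControlNet; a bare float applies the same scale to all of them.
image = pipe(
    "a futuristic city street",
    image=[canny_image, pose_image],
    controlnet_conditioning_scale=[1.0, 0.5],
).images[0]
```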
* add train_controlnet_flax --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Workaround for saving dynamo-wrapped models. * Accept suggestion from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * Apply workaround when overriding pipeline components. * Ensure the correct config.json is saved to disk. Instead of the dynamo class. * Save correct module (not compiled one) * Add test * style * fix docstrings * Go back to using string comparisons. PyTorch CPU does not have _dynamo. * Simple test for save_pretrained of compiled models. * Helper function to test whether module is compiled. --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
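A short sketch of the scenario the workaround addresses: saving a pipeline whose UNet has been wrapped by `torch.compile`. The checkpoint id and output path are placeholders.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# torch.compile wraps the UNet in a dynamo OptimizedModule whose class name
# and config must not leak into the saved config.json.
pipe.unet = torch.compile(pipe.unet)

# With the workaround, the original (non-compiled) module and its config are
# what end up on disk.
pipe.save_pretrained("./sd15-saved")
```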
…odel from checkpoint (#2768) * Allow user to disable SafetyChecker and enable dtypes if loading models from .ckpt or .safetensors * Fix Import sorting (Ruff error) * Get rid of the dtype convert method as it was implemented all along * Fix the docstring * Fix ruff formatting --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove suggestion to use cuDNN benchmark in docs * removing the wrong line
* Remove duplicate sentence * format
…ly prompt_embeds (instead of always requiring a prompt) (#2842) Fix error 'required positional argument: prompt' when Legacy Inpaint is called only with prompt_embeds
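A hedged sketch of the prompt_embeds-only call this fix unblocks, shown on the standard text-to-image pipeline for brevity; the legacy inpaint pipeline accepts the same `prompt_embeds` argument. Checkpoint id and prompt are examples.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Build prompt embeddings manually instead of passing a raw string prompt.
text_inputs = pipe.tokenizer(
    "a photo of an astronaut riding a horse",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(text_inputs.input_ids)[0]

# After the fix, `prompt` can be omitted entirely when `prompt_embeds` is given.
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=20).images[0]
```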
Fix link to LoRA training guide
…ng Pipeline (#2809) * Change the docs to use the parent DiffusionPipeline class when loading a checkpoint using from_pretrained() instead of a child class (e.g. StableDiffusionPipeline) where possible. * Run make style to fix style issues. * Change more docs to use DiffusionPipeline rather than a subclass. --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
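The docs change boils down to the pattern below: load through the parent class and let diffusers resolve the concrete pipeline from the checkpoint. The checkpoint id is an example.

```python
from diffusers import DiffusionPipeline

# The parent class inspects the checkpoint and returns the right subclass
# (here a StableDiffusionPipeline), so the docs no longer need to name it.
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(type(pipe).__name__)  # -> StableDiffusionPipeline
```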
Add last_epoch arg to optimization.get_scheduler. Allows the specification of the index of the last epoch when resuming training.
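A hedged sketch of resuming a learning-rate schedule with the new `last_epoch` argument; the model, optimizer, step counts, and resume index are placeholders.

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)  # stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# PyTorch LR schedulers require "initial_lr" in each param group when
# last_epoch != -1; in real resume code this comes from the restored
# optimizer/scheduler state.
for group in optimizer.param_groups:
    group.setdefault("initial_lr", group["lr"])

# Resume the schedule at a hypothetical index instead of restarting it; the
# schedulers returned by get_scheduler are stepped once per optimizer step.
lr_scheduler = get_scheduler(
    "cosine",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    last_epoch=2_499,
)
```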
…2853) Add warning in __init__ if user loads a checkpoint with pipeline.unet.config.in_channels other than 9.
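The warning guards against the mistake sketched below: pairing the inpainting pipeline with a plain 4-channel text-to-image UNet. Checkpoint id is an example.

```python
from diffusers import StableDiffusionInpaintPipeline

# The dedicated inpainting UNet expects 9 input channels
# (4 noisy latent + 4 masked-image latent + 1 mask).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)
print(pipe.unet.config.in_channels)  # -> 9

# Loading a plain text-to-image checkpoint into this pipeline yields a
# 4-channel UNet; that is the case the new __init__ warning flags.
```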
Co-authored-by: njindal <njindal@adobe.com>
…roperly in `StableUnCLIPImg2ImgPipeline` (#2861) * improve stable unclip doc. * add: test to check if image_embeds None case is handled. * apply formatting.
Fix typos
Fix typos
Fix typos
…n JAX (#2859) * improve stable unclip doc. * feat: add streaming support to controlnet flax training script. * fix: CLI arg. * fix: torch dataloader shuffle setting. * fix: dataset length. * fix: wandb config. * fix: steps_per_epoch in the training loop. * add: entry about streaming in the readme * get column names from iterable dataset + fix final logging --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
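A minimal sketch of the streaming mode the Flax ControlNet training script gained, using 🤗 Datasets; the dataset name is the example used in the ControlNet training docs and stands in for any image/text dataset.

```python
from datasets import load_dataset

# streaming=True yields an IterableDataset, so nothing is materialized on disk
# and len()-based APIs are unavailable; column names have to be read from the
# first example, which is what the fixes above handle in the training loop.
dataset = load_dataset("fusing/fill50k", split="train", streaming=True)

first_example = next(iter(dataset))
print(list(first_example.keys()))  # e.g. ["image", "conditioning_image", "text"]
```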
* update performance tutorial * fix divs * oops forgot to close tag * apply feedback * apply feedback * apply feedback * align doc title
…docs (#2897) * improve stable unclip doc. * add: entry of StableUnCLIPPipeline to the docs * Apply suggestions from code review Co-authored-by: apolinario <joaopaulo.passos@gmail.com> --------- Co-authored-by: apolinario <joaopaulo.passos@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
add --half to convert_original_stable_diffusion_to_diffusers.py
* img2img.multiple.controlnets.pipeline * remove comments --------- Co-authored-by: mishka <gartsocial@gmail.com>
* add load textual inversion embeddings draft * fix quality * fix typo * make fix copies * move to textual inversion mixin * make it accept from sd-concept library * accept list of paths to embeddings * fix styling of stable diffusion pipeline * add dummy TextualInversionMixin * add docstring to textualinversionmixin * add load textual inversion embeddings draft * fix quality * fix typo * make fix copies * move to textual inversion mixin * make it accept from sd-concept library * accept list of paths to embeddings * fix styling of stable diffusion pipeline * add dummy TextualInversionMixin * add docstring to textualinversionmixin * add case for parsing embedding from auto1111 UI format Co-authored-by: Evan Jones <evan.a.jones3@gmail.com> Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com> * fix style after rebase * move textual inversion mixin to loaders * move mixin inheritance to DiffusionPipeline from StableDiffusionPipeline * update dummy class name * addressed all comments * fix old dangling import * fix style * proposal * remove bogus * Apply suggestions from code review Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Will Berman <wlbberman@gmail.com> * finish * make style * up * fix code quality * fix code quality - again * fix code quality - 3 * fix alt diffusion code quality * fix model editing pipeline * Apply suggestions from code review Co-authored-by: Pedro Cuenca <pedro@huggingface.co> * Finish --------- Co-authored-by: Evan Jones <evan.a.jones3@gmail.com> Co-authored-by: Ana Tamais <aninhamoraestamais@gmail.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Will Berman <wlbberman@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
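A hedged sketch of the end-user API this mixin introduces, assuming the public method name `load_textual_inversion` on pipelines inheriting it; the concept repo, file path, and placeholder token are examples.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Load a concept from the sd-concepts-library Hub repo ...
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# ... or an Automatic1111-style embedding file; both formats are parsed.
# pipe.load_textual_inversion("./embeddings/my-style.pt")

image = pipe("a <cat-toy> sitting on a bench").images[0]
```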
) * add use_karras_sigmas option thanks @Stax124 * fix sigma_min/max from scheduler.sigmas * add docstring * revert to use k_diffusion_model.sigma, to(device) * add integration test * make style
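The commit above targets the K-diffusion pipeline, but the flag is easiest to illustrate on a scheduler that exposes the same option; a hedged sketch assuming `DPMSolverMultistepScheduler`'s `use_karras_sigmas`, with a placeholder checkpoint and prompt.

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Karras et al. (2022) sigma spacing tends to improve quality at low step counts.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe("a photo of a red fox", num_inference_steps=20).images[0]
```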
* fix slow tests * up
* Remove suggestion to use cuDNN benchmark in docs * removing the wrong line * add support for embeds * fix line length
* modify intel opts inference script * modify readme * modify doc * fix some issues * reformat * reformat script * format issue * format issue
…2902) * [2884]: Fix cross_attention_kwargs in StableDiffusionImg2ImgPipeline * [Build Fix] * [Build Fix] --------- Co-authored-by: njindal <njindal@adobe.com>
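A hedged sketch of the argument the fix makes effective in the img2img pipeline; the checkpoint id, LoRA repo, and prompt are examples, and the input image is a blank stand-in.

```python
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# LoRA attention processors are what consume the "scale" entry below
# (example LoRA repo; any load_attn_procs-compatible weights would do).
pipe.unet.load_attn_procs("sayakpaul/sd-model-finetuned-lora-t4")

init_image = Image.new("RGB", (512, 512))  # placeholder for a real input image

# The fix ensures cross_attention_kwargs is forwarded to the UNet's attention
# processors in img2img, mirroring the text-to-image pipeline.
image = pipe(
    "turn this into a watercolor painting",
    image=init_image,
    strength=0.75,
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```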
speed up test
Also capitalize the notebook provider name
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>