[Flux Dreambooth LoRA] - te bug fixes & updates #9139
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
sayakpaul left a comment:
Let's add tests. There's no way we can detect these otherwise. What say?
…dreambooth-lora # Conflicts: # examples/dreambooth/train_dreambooth_lora_flux.py
@Gothos could be because of some bug in the training, not sure if it's the loading because the …
That was meant to be a piece of info, though. If you find anything dodgy, holler at me.

Will do!
I think it should be better now with the loss fix, and memory usage is lower. @Gothos @arcanite24, give it a try if you want.

Yep, I can also confirm that it converges!
vae_scale_factor=vae_scale_factor,
)

model_pred = model_pred * (-sigmas) + noisy_model_input
Where did this go?
This was removed because we discarded precondition_outputs. Originally it was:

# Follow: Section 5 of https://arxiv.org/abs/2206.00364.
# Preconditioning of the model outputs.
if args.precondition_outputs:
    model_pred = model_pred * (-sigmas) + noisy_model_input

and the line was accidentally left in by the previous merge after we removed precondition_outputs.
See also #9086 (comment) for more context.
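For readers following along, here is a minimal sketch of the two target conventions being discussed; the helper name and signature are illustrative, not the training script's exact code. With precondition_outputs, the raw prediction is remapped so the model effectively estimates the clean latents; without it, the model is trained directly against the flow-matching velocity target.

```python
import torch.nn.functional as F

def flow_matching_loss(model_pred, noisy_model_input, model_input, noise, sigmas,
                       precondition_outputs=False):
    # Hypothetical helper mirroring the logic discussed in this thread.
    if precondition_outputs:
        # Section 5 of https://arxiv.org/abs/2206.00364: remap the raw output
        # so the model effectively predicts the clean latents.
        model_pred = model_pred * (-sigmas) + noisy_model_input
        target = model_input
    else:
        # Plain flow-matching target: predict the velocity (noise - clean latents).
        target = noise - model_input
    return F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```

The leftover line effectively applied the preconditioning remap while still training against the velocity target, mixing the two conventions, which appears to be what this PR removes.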
@linoytsaban let's maybe also include your additional results in the comments when they're available? I left one question on the changes, and then I think we can merge.

Yarn art LoRA example (it overfits a bit, but has a nice vibe to it):

@sayakpaul let's merge once the tests pass?

Thank you!
* add requirements + fix link to bghira's guide
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* style
* add tests
* fix encode_prompt call
* style
* unpack_latents test
* fix lora saving
* remove default val for max_sequence_length in encode_prompt
* remove default val for max_sequence_length in encode_prompt
* style
* testing
* style
* testing
* testing
* style
* fix sizing issue
* style
* revert scaling
* style
* style
* scaling test
* style
* scaling test
* remove model pred operation left from pre-conditioning
* remove model pred operation left from pre-conditioning
* fix trainable params
* remove te2 from casting
* transformer to accelerator
* remove prints
* empty commit
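One of the commits above adds a round-trip test for latent unpacking. As a hedged illustration of what such a test checks, here is a simplified sketch of the 2x2 patch packing convention Flux uses; the function names and shapes are assumptions for this example, not the pipeline's exact helpers.

```python
import torch

def pack_latents(latents):
    # (B, C, H, W) -> (B, H/2 * W/2, C * 4): group each 2x2 patch into one token.
    b, c, h, w = latents.shape
    latents = latents.view(b, c, h // 2, 2, w // 2, 2)
    latents = latents.permute(0, 2, 4, 1, 3, 5)
    return latents.reshape(b, (h // 2) * (w // 2), c * 4)

def unpack_latents(latents, height, width):
    # Inverse of pack_latents: (B, N, C*4) -> (B, C, H, W).
    b, num_patches, channels = latents.shape
    latents = latents.view(b, height // 2, width // 2, channels // 4, 2, 2)
    latents = latents.permute(0, 3, 1, 4, 2, 5)
    return latents.reshape(b, channels // 4, height, width)

# Round-trip check: unpacking packed latents should recover the original tensor.
x = torch.randn(2, 16, 64, 64)
assert torch.equal(unpack_latents(pack_latents(x), 64, 64), x)
```

Because packing is a pure reordering of elements, the round trip should be exact, so an equality assertion (rather than a tolerance check) is appropriate here.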


