Optimize and Improve GFPGAN and Real-ESRGAN Pipeline #162
Conversation
@lstein Here you go. Let me know if you need anything else.
@lstein I'd recommend merging this before other commits: the new server.py file, for example, still uses GFPGAN the old way, which it shouldn't anymore, so those call sites will need to be fixed again. Other pushes to dream.py are also raising conflicts.
@blessedcoolant It's back to generating two identical output lines for each requested image. It's not doing the work twice; they're just duplicate entries in the results list.
No worries. I'm aware and taking care of the conflicts. |
That's because the output is linked to the callback. The double lines only happen when you upscale or face restore, which technically creates two images: the SD image is written first, then overwritten under the same filename once upscaling and restoration finish. As long as the logging is tied to the output callback, it'll show two lines. I'll do a patch later that changes this behavior, but it's unrelated to this PR.
I understand it's just cosmetic, but I found the place in pngwriter where the second copy of the file is being added to results and conditioned it on the upscale option so that the second log line is only written when -save_orig is specified. |
Perfect. Thank you. The double line was bothering me a lot lol. |
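The fix described above can be sketched roughly as follows. All names here are illustrative, not the actual pngwriter API: a hypothetical helper that only appends the second results entry (the post-upscale file) when the user asked to keep the original via -save_orig.

```python
def collect_results(image_path, seed, upscaled_path=None, save_original=False):
    """Build the results list for one generated image.

    Upscaling/face restoration overwrites the original file under the same
    name, which produced a duplicate log line. Here the second entry is only
    recorded when the caller keeps the original (hypothetical -save_orig flag).
    """
    results = [(image_path, seed)]
    if upscaled_path is not None and save_original:
        # Second log line only when the original file actually survives.
        results.append((upscaled_path, seed))
    return results
```

With save_original left at its default, a single entry is returned even when an upscaled path exists, matching the "one log line per image" behavior the patch aims for.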
@blessedcoolant I've merged, committed and pushed, but I want to make you aware that there is still a bug in how batches are processed. If I specify a single iteration and a batch size of 2 (-b2) I end up with the following:
This is without -save_orig, so I should be getting two retouched and upscaled images and no originals. Batches are very problematic because I've never figured out how to control or recover the seeds of the second and subsequent images. I therefore let the ML inference code do its thing and then tack a version number onto the first image's seed. This confuses me, and I'm sure it confuses your code too. I think most people use -n rather than -b, so I've let this go out as a known bug, but could you have a look? I may just remove the batch option anyway, since it uses a lot of VRAM and doesn't give you reusable seeds.
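The seed problem above comes from the sampler consuming a single RNG seed for the whole batch, so only the first image's seed is known. A minimal sketch of the version-suffix workaround described (names and filename format are hypothetical):

```python
def batch_filenames(prefix, seed, batch_size):
    """Name the images from one batch.

    Only the first image's seed is recoverable (one seed seeded the whole
    batch), so later images reuse it with a version suffix. Those suffixed
    names do NOT correspond to reproducible seeds.
    """
    names = []
    for i in range(batch_size):
        if i == 0:
            names.append(f"{prefix}.{seed}.png")
        else:
            names.append(f"{prefix}.{seed}.{i:02d}.png")
    return names
```

This illustrates why downstream code that parses a seed out of the filename gets confused: the suffixed entries carry a seed that never actually produced them.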
@lstein Let me take a look at that. I've only ever used -n myself; even other repos with optimized SD recommend -n over -b, because -b is just a memory hog. It's more practical to run more iterations than to generate more samples per batch. Let me see whether there's any real reason to retain -b.
@lstein Turns out on an RTX 3080 8GB card, I can't even do a batch size of 2. Maybe just nuke the option. -n does more or less exactly the same thing without asking me to sell my kidney to buy more GPUs.
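The tradeoff being discussed can be shown abstractly: -n runs the sampler N times at batch size 1 (peak VRAM stays flat, since each iteration's activations are freed before the next), while -b asks for N samples in one pass (peak VRAM scales with N). A toy model, not the actual memory accounting:

```python
def peak_memory_units(iterations, batch_size, per_sample_cost=1):
    """Toy model: peak VRAM scales with batch size, not iteration count,
    because iterations run sequentially and release memory between runs."""
    return batch_size * per_sample_cost

def total_images(iterations, batch_size):
    """Both strategies produce the same number of images overall."""
    return iterations * batch_size
```

So `-n 4` and `-b 4` both yield four images, but the batched run needs roughly four times the peak memory, which is why it fails on an 8GB card.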
Clean PR #102