Description
I've been exhausting a lot of options trying to train with my own dataset (boxes and text). Looking at the images saved during training, the output generally looks good, but the bounding boxes don't appear to be used: I would expect xxxxxxxx_real.png and xxxxxxxx.png to look essentially the same, or at least to have the objects aligned in position. Instead, the model seems to generate new images that match the training data well in overall appearance, but with no alignment to the boxes.
Looking at how the model's output images are generated, the code uses the PLMSSampler, but nowhere in its classes/methods does grounding_input from the input dictionary appear to be used. I know the result of the sample method is then passed to the autoencoder, but I still don't understand how grounding_input is actually used during inference (the same goes for gligen_inference.py).
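For context, my current understanding from the paper is that the grounding tokens are consumed inside the UNet's transformer blocks through a gated self-attention layer rather than in the sampler itself, which would explain why PLMSSampler never touches grounding_input directly. Below is a minimal sketch of that mechanism as I understand it; the class name, shapes, and structure are my own simplification, not the repo's actual implementation, so please correct me if this is wrong:

```python
import torch
import torch.nn as nn

class GatedSelfAttention(nn.Module):
    """Simplified sketch of GLIGEN-style gated self-attention.

    Visual tokens attend over [visual tokens; grounding tokens], and the
    result is added back through a learnable, tanh-gated residual.
    Names and shapes are illustrative only.
    """
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # gate starts at 0 so the pretrained weights are unchanged at init
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, visual_tokens, grounding_tokens):
        # visual_tokens:    (B, N_vis, dim)  flattened UNet feature map
        # grounding_tokens: (B, N_box, dim)  encoded (box, phrase) pairs
        x = torch.cat([visual_tokens, grounding_tokens], dim=1)
        out, _ = self.attn(self.norm(x), self.norm(x), self.norm(x))
        # only the visual positions receive the gated residual update
        return visual_tokens + torch.tanh(self.gamma) * out[:, : visual_tokens.shape[1]]

# toy usage: 2 boxes per image, 16x16 feature map, channel width 320
vis = torch.randn(1, 16 * 16, 320)
grd = torch.randn(1, 2, 320)          # stand-in for the position-net output
print(GatedSelfAttention(320)(vis, grd).shape)   # torch.Size([1, 256, 320])
```

If that's roughly right, then grounding_input would have to be routed from the input dictionary down into the UNet's forward pass at every denoising step, but I can't find where that routing happens.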
Hoping the authors or someone else can shed light on this. Thanks!