Training on custom dataset #103

Description

@bklynchconnect

I've been exhausting a lot of options trying to train with my own dataset (boxes and text). Looking at the saved images during training, the output generally looks good, but the bounding boxes aren't being used. I would expect xxxxxxxx_real.png and xxxxxxxx.png to look essentially the same, or at least to have the objects aligned in terms of position. Instead, the model just seems to generate new images that match the training data well in overall appearance, with no alignment to the boxes.
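To make the misalignment concrete, here's the quick throwaway check I've been using: draw the ground-truth boxes onto both snapshots and compare by eye. This is just my own sketch, not GLIGEN code, and it assumes the annotations are normalized `[x0, y0, x1, y1]` boxes:

```python
from PIL import Image, ImageDraw

def overlay_boxes(image_path, boxes, out_path):
    """Draw normalized [x0, y0, x1, y1] boxes onto an image and save it."""
    img = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    w, h = img.size
    for x0, y0, x1, y1 in boxes:
        draw.rectangle([x0 * w, y0 * h, x1 * w, y1 * h], outline="red", width=3)
    img.save(out_path)

boxes = [(0.1, 0.2, 0.5, 0.8)]  # example annotation for one object
overlay_boxes("xxxxxxxx_real.png", boxes, "real_overlay.png")  # training snapshot
overlay_boxes("xxxxxxxx.png", boxes, "gen_overlay.png")        # generated sample
```

On my runs, the generated overlay shows the objects nowhere near the drawn boxes.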

Looking at how the model's output images are generated, it uses the PLMSSampler, but nowhere in those classes/methods is grounding_input read from the input dictionary. I know the result of the sample method is then passed to the autoencoder, but I still don't understand how grounding_input is actually consumed during inference (the same goes for gligen_inference.py).
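For context, my working mental model from the paper (so every identifier below is my guess, not the repo's actual code) is that the sampler simply forwards the whole input dict to the model, and the grounding information is consumed inside the UNet: boxes plus phrase embeddings are mapped to grounding tokens, which each transformer block injects through a gated self-attention layer. A minimal self-contained toy of that pattern:

```python
# Toy sketch of the gated self-attention injection described in the GLIGEN
# paper. All names here are my assumptions, not the repo's identifiers.
import torch
import torch.nn as nn

class GatedSelfAttentionBlock(nn.Module):
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # starts closed, learned during training

    def forward(self, x, objs):
        # Visual tokens attend over [visual tokens; grounding tokens];
        # only the visual positions are kept, scaled by a learned gate.
        n = x.shape[1]
        h = torch.cat([x, objs], dim=1)
        out, _ = self.attn(h, h, h)
        return x + torch.tanh(self.gate) * out[:, :n]

# Boxes + phrase embeddings -> grounding tokens (the paper's "position net").
position_net = nn.Linear(4 + 768, 64)

boxes = torch.rand(2, 3, 4)        # (batch, n_objs, xyxy)
phrases = torch.randn(2, 3, 768)   # CLIP phrase embeddings
objs = position_net(torch.cat([boxes, phrases], dim=-1))

block = GatedSelfAttentionBlock(dim=64)
x = torch.randn(2, 16, 64)         # visual tokens from one UNet layer
print(block(x, objs).shape)        # torch.Size([2, 16, 64])
```

If that's roughly right, it would explain why the sampler never touches grounding_input, but I'd still like confirmation of where it actually enters the forward pass in this repo.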

Hoping the authors or someone else can shed light on this. Thanks!
