
qq about code re porting to flax #35

@krahnikblis

Description


Dude, this is awesome! I've been messing with textual inversion for a while, but it's not as precise as I want, and this looks like the better way.

I'd like to help extend this to the Flax version, which runs much faster than PyTorch on TPUs and even GPUs. But since I'm not familiar with the DreamBooth and AUTOMATIC1111 codebases, can you point me to the parts of the training script that you modified? Or I guess I can just diff the repos. Which repo did you originally fork from? Any gotchas to watch for, or has anyone already started on this?

I also noticed the PyTorch checkpoints use different weight/layer names. Hoping anyone reading this can point to how we can map them across.
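For anyone else hitting the name-mapping question: the usual approach is to walk the PyTorch `state_dict` keys and rewrite them into Flax-style parameter paths. A minimal sketch below, assuming common conventions (Flax folds numeric submodule indices into the parent name, calls linear/conv weights `kernel`, and stores `Dense` kernels as `(in, out)` while `torch.nn.Linear` stores `(out, in)`); the exact scheme for any specific repo will differ, so treat the key rules here as placeholders.

```python
import numpy as np

def torch_key_to_flax(key):
    """Map a PyTorch state-dict key to a Flax param path (a tuple).

    Example convention assumed here:
      'blocks.0.linear.weight' -> ('blocks_0', 'linear', 'kernel')
    """
    merged = []
    for part in key.split("."):
        # Flax typically folds list indices into the parent module name.
        if part.isdigit() and merged:
            merged[-1] = f"{merged[-1]}_{part}"
        else:
            merged.append(part)
    # Linear/conv weights are named 'kernel' in Flax; biases stay 'bias'.
    if merged[-1] == "weight":
        merged[-1] = "kernel"
    return tuple(merged)

def convert_state_dict(torch_state):
    """Build a flat {flax_path: array} dict from a PyTorch state dict.

    Transposes 2-D 'weight' matrices because Flax Dense kernels are
    (in_features, out_features) while torch.nn.Linear uses the reverse.
    """
    flax_params = {}
    for key, value in torch_state.items():
        arr = np.asarray(value)
        if key.endswith("weight") and arr.ndim == 2:
            arr = arr.T
        flax_params[torch_key_to_flax(key)] = arr
    return flax_params
```

Note that conv kernels need a different axis permutation (PyTorch `OIHW` vs. Flax `HWIO`), and attention blocks often split or fuse QKV projections differently, so a real converter needs per-layer special cases on top of this generic renaming.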
