
Issues about evaluation performance #18


Description

@Ethereal0725

Hello, we used the model you provided to evaluate on the test data you provided and obtained the following results. The base model is SD 1.5.
| Model | Dataset | FID | CLIP-Score | Paper |
| --- | --- | --- | --- | --- |
| ControlNet | COCOSeg | 23.72 | 31.52 | |
| ControlNet | MultiGen-20M (Canny Edge) | 17.76 | 31.93 | |
| ControlNet | MultiGen-20M (Hed Edge) | 19.17 | 32.01 | |
| ControlNet | MultiGen-20M (LineArt Edge) | 16.92 | 32.15 | |
| ControlNet | MultiGen-20M (Depth Map) | 21.84 | 32.04 | |
| ControlNet++ | COCOSeg | 20.97 | 31.98 | |
| ControlNet++ | MultiGen-20M (Canny Edge) | 23.16 | 31.53 | |
| ControlNet++ | MultiGen-20M (Hed Edge) | 74.77 | 27.50 | |
| ControlNet++ | MultiGen-20M (LineArt Edge) | 14.16 | 31.91 | |
| ControlNet++ | MultiGen-20M (Depth Map) | 17.95 | 32.02 | |
We observed a significant gap between the results obtained in our tests and those reported in your paper. Are there any specific steps or considerations we should be aware of to accurately replicate your results?
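
In case it helps pinpoint the discrepancy, here is a minimal sketch of the kind of evaluation we ran. The library choices (clean-fid for FID, torchmetrics for CLIP-Score), the CLIP backbone, and all paths are placeholders/assumptions on our side, not necessarily the exact setup used in the paper:

```python
# Evaluation sketch. Library choices (clean-fid, torchmetrics) and all paths
# are illustrative assumptions, not necessarily what the paper used.
import glob

from cleanfid import fid
from torchmetrics.multimodal.clip_score import CLIPScore
from torchvision.io import read_image

GEN_DIR = "outputs/multigen20m_canny"           # generated images (placeholder)
REF_DIR = "data/multigen20m_canny/ref_images"   # ground-truth images (placeholder)
PROMPTS = "data/multigen20m_canny/prompts.txt"  # one prompt per line (placeholder)

# FID between the generated set and the reference set.
fid_score = fid.compute_fid(GEN_DIR, REF_DIR)
print(f"FID: {fid_score:.2f}")

# CLIP-Score: 100 * cosine similarity between image and prompt embeddings,
# averaged over the set (same ~31-32 scale as the table above).
metric = CLIPScore(model_name_or_path="openai/clip-vit-large-patch14")
prompts = open(PROMPTS).read().splitlines()
for path, prompt in zip(sorted(glob.glob(f"{GEN_DIR}/*.png")), prompts):
    metric.update(read_image(path), prompt)  # uint8 tensor [3, H, W] + caption
print(f"CLIP-Score: {metric.compute().item():.2f}")
```

If you could confirm which FID implementation, CLIP backbone, image resolution, and number of samples were used for the paper numbers, we can rerun with matching settings.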
