Disabling safety-model or fixing false positives? #239

@bartekleon

Description

I have really wanted to try this project, so I tried running it through diffusers (the default configuration of this repo runs out of memory for me, while the diffusers one actually runs). I get the "Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." error every single time: with the default prompt, and even with just "star" in case "riding a horse" counts as NSFW. Is it possible to debug these runs so that the "handmade" safety feature can be fixed or at least inspected?
Or is there a way to run this without the safety model, or with it switched off?
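
For reference, what I have in mind is something along these lines, applied on top of the snippet below (just a sketch; I am assuming the pipeline's safety_checker attribute can simply be swapped for a pass-through, I have not checked whether that is actually supported):

# untested sketch: replace the safety checker with a pass-through that
# returns the images unchanged and reports no NSFW content
pipe.safety_checker = lambda images, **kwargs: (images, [False] * len(images))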
Running the default:

import torch
from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-4",
    revision="fp16", 
    torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "star"
with autocast("cuda"):
    image = pipe(prompt).images[0]  
image.save("star.png")

(I tried seed 12345 if someone wants to reproduce this. Maybe my PC is just not safe for work or something.)
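
Concretely, I pinned the seed like this on top of the snippet above (assuming the generator argument is the right way to make the run reproducible):

# assumed way to make the run deterministic: pass an explicitly seeded generator
generator = torch.Generator("cuda").manual_seed(12345)
with autocast("cuda"):
    image = pipe(prompt, generator=generator).images[0]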
Thanks in advance
