Our method obscures identity while preserving attributes such as gaze, expressions, and head pose (in contrast to Stable Diffusion Inpainting) and enables selective anonymization of specific facial regions.
To install all required dependencies, create a new conda environment using the provided environment.yml file:
conda env create -f environment.yml

Then activate the environment:
conda activate nullface

We include a sample image from the CelebA-HQ dataset in the my_dataset folder to demonstrate example usage. The hyperparameters specified below are the ones used in our experiments for comparison with baseline methods.
from anonymize_face import anonymize_face

output_img = anonymize_face(
    image_path="my_dataset/images/00080.png",
    mask_image_path="my_dataset/masks/00080/eyes_and_mouth.png",
    sd_model_path="stable-diffusion-v1-5/stable-diffusion-v1-5",
    insightface_model_path="~/.insightface",
    device_num=0,  # CUDA device index
    guidance_scale=10.0,
    num_diffusion_steps=100,
    eta=1.0,
    skip=70,
    ip_adapter_scale=1.0,
    id_emb_scale=1.0,
    output_log_file="log.txt",
    det_thresh=0.1,  # InsightFace face-detection threshold
    det_size=640,    # InsightFace detection input size
    seed=0,          # random seed for reproducibility
    mask_delay_steps=10,
)
if output_img:
    output_path = "anonymized.png"
    output_img.save(output_path)
    print(f"Anonymized image saved to: {output_path}")
else:
    print(
        "Face could not be detected. Please check the output log file for more details."
    )
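The same settings can be applied to a whole folder by wrapping the call in a loop. Below is a minimal batch sketch (not part of the repository), assuming every image in my_dataset/images has a matching mask at my_dataset/masks/&lt;stem&gt;/eyes_and_mouth.png:

# Minimal batch sketch (not part of the repository): anonymize every image in
# my_dataset/images with the hyperparameters from the example above.
from pathlib import Path

from anonymize_face import anonymize_face

image_dir = Path("my_dataset/images")
output_dir = Path("anonymized")
output_dir.mkdir(exist_ok=True)

for image_path in sorted(image_dir.glob("*.png")):
    # Assumes one mask folder per image, named after the image stem.
    mask_path = Path("my_dataset/masks") / image_path.stem / "eyes_and_mouth.png"
    output_img = anonymize_face(
        image_path=str(image_path),
        mask_image_path=str(mask_path),
        sd_model_path="stable-diffusion-v1-5/stable-diffusion-v1-5",
        insightface_model_path="~/.insightface",
        device_num=0,
        guidance_scale=10.0,
        num_diffusion_steps=100,
        eta=1.0,
        skip=70,
        ip_adapter_scale=1.0,
        id_emb_scale=1.0,
        output_log_file="log.txt",
        det_thresh=0.1,
        det_size=640,
        seed=0,
        mask_delay_steps=10,
    )
    if output_img:
        output_img.save(output_dir / image_path.name)
    else:
        print(f"Skipped {image_path.name}: no face detected (see log.txt).")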
For the quantitative comparisons against baseline methods in our paper, we selected a set of test subjects. For each subject, we created corresponding segmentation masks to selectively control the visibility of the eye and mouth regions.
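To target a different combination of regions, per-region masks can be merged into a single control mask. A minimal sketch follows; eyes.png and mouth.png are hypothetical file names not shipped with the repository, and the white-marks-the-region convention is an assumption, so compare against the provided eyes_and_mouth.png sample before relying on it:

# Hypothetical sketch: merge per-region binary masks into one control mask.
# eyes.png and mouth.png are assumed file names; the white-marks-the-region
# convention is an assumption -- check the provided sample masks.
from PIL import Image, ImageChops

eyes = Image.open("my_dataset/masks/00080/eyes.png").convert("L")
mouth = Image.open("my_dataset/masks/00080/mouth.png").convert("L")

# Union of the two regions via the per-pixel maximum of the binary masks.
combined = ImageChops.lighter(eyes, mouth)
combined.save("my_dataset/masks/00080/eyes_and_mouth.png")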
The list of selected test subjects and their corresponding segmentation masks are available for download on the Hugging Face Hub.
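One way to fetch them programmatically is with the huggingface_hub client; the repository id below is a placeholder and must be replaced with the dataset actually linked from this project:

# Sketch for fetching the evaluation subjects and masks with huggingface_hub.
# The repo_id below is a placeholder -- substitute the dataset linked from
# this project.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<user>/<nullface-eval-data>",  # placeholder, not a real repo id
    repo_type="dataset",
)
print(f"Downloaded evaluation data to: {local_dir}")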
This project is built upon Diffusers and DDPM inversion.