Yihong Guo<sup>1</sup>, Youwei Lyu<sup>2</sup>, Jiajun Tang<sup>2</sup>, Yizhuo Zhou<sup>1</sup>, Hongliang Wang<sup>3</sup>, Jinwei Chen<sup>2</sup>, Changqing Zou<sup>1†</sup>, Qingnan Fan<sup>2</sup>

<sup>1</sup>Zhejiang University, <sup>2</sup>vivo BlueImage Lab, <sup>3</sup>University of Chinese Academy of Sciences
- ✅ Release inference code.
- 🔴 Release model weights.
- 🔴 Release iOS toy deployment.
```shell
# Clone the repository
git clone https://github.com/OpenVeraTeam/VeraRetouch.git
cd VeraRetouch

# Create and activate conda environment
conda create -n vera-retouch python=3.10
conda activate vera-retouch

# Install dependencies
pip install -r requirements.txt
```

Download our pretrained weights from HuggingFace and place the pretrained model in `./checkpoints`.
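Before running inference, you can confirm the weights landed where the commands below expect them. This is a convenience sketch of ours, not part of the repo; the `check_weights` helper is hypothetical.

```shell
# Hypothetical helper (not part of VeraRetouch): report whether a
# checkpoint directory exists at the path the inference commands use.
check_weights() {
    [ -d "$1" ] && echo "found" || echo "missing"
}

check_weights ./checkpoints/VeraRetouch
```

If this prints `missing`, download the weights from HuggingFace first.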
Our model supports three inference modes:
- Auto Retouch: Only an image is input.
```shell
python inference.py --mode auto \
    --model-path ./checkpoints/VeraRetouch \
    --img_paths ./data_samples/input/sample_flower.jpg \
    --save_dir ./data_samples/output/ \
    --chunk -1 \
    --batch_size 1
```

  - `--model-path`: path to the pretrained model.
  - `--img_paths`: input image path(s); multiple paths are supported.
  - `--save_dir`: directory where the output texts and images are saved.
  - `--chunk`: pass `-1` to disable chunking (as in the examples). When GPU memory is insufficient, set it to a chunk size instead — recommended value: `262144` (512×512) — so the renderer processes large images in chunks. Enabling chunking reduces inference speed.
  - `--batch_size`: batch inference is supported.

- Style Retouch: An image and a user prompt are input.
```shell
python inference.py --mode style \
    --prompt "I want a dreamy bright pink style." \
    --model-path ./checkpoints/VeraRetouch \
    --img_paths ./data_samples/input/sample_flower.jpg \
    --save_dir ./data_samples/output/ \
    --chunk -1 \
    --batch_size 1
```

  `--prompt` (the style user prompt) is only used in `style` mode; the remaining flags are the same as in Auto Retouch.

- Param Retouch: An image and retouching operator parameters are input.
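Param mode reads its operator parameters from a JSON file. The exact operator names and value ranges are defined by VeraRetouch and are not documented here, so the file written below is purely a hypothetical sketch of the expected shape.

```shell
# Write a hypothetical parameter file. The keys (exposure, contrast,
# saturation) and their values are assumptions for illustration only;
# the shipped ./data_samples/param.json shows the real schema.
cat > param_example.json <<'EOF'
{
  "exposure": 0.30,
  "contrast": 0.10,
  "saturation": -0.05
}
EOF
```

Such a file would then be passed via `--instruction_path`; treat `./data_samples/param.json` as the authoritative example of the format.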
```shell
python inference.py --mode param \
    --instruction_path ./data_samples/param.json \
    --model-path ./checkpoints/VeraRetouch \
    --img_paths ./data_samples/input/sample_flower.jpg \
    --save_dir ./data_samples/output/ \
    --chunk -1 \
    --batch_size 1
```

  `--instruction_path` (the retouching operator parameters) is only used in `param` mode; the remaining flags are the same as in Auto Retouch.

Coming soon...