Edit Transfer: Learning Image Editing via Vision In-Context Relations
Lan Chen, Qi Mao, Yuchao Gu and Mike Zheng Shou
MIPG, Communication University of China; Show Lab, National University of Singapore
## Setup

```shell
git clone https://github.com/CUC-MIPG/Edit-Transfer.git
cd Edit-Transfer
conda create -n EditTransfer python=3.10
conda activate EditTransfer
pip install -r requirements.txt
```

## Training

We use the open-source AI-Toolkit to train Edit Transfer. We provide the training data and a configuration file in this repo:
- Configuration file: `config/edit_transfer.yml`
- Training data: `data/edit_transfer.zip`
You can start training by running:

```shell
python run.py config/edit_transfer.yml
```

## Inference

You can download the trained Edit Transfer checkpoint for inference: https://drive.google.com/file/d/1V4HraIjlMrbPfAPivk5vYoq4bQTzcP4L/view?usp=sharing
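The real settings live in `config/edit_transfer.yml`. As a rough orientation only, AI-Toolkit training configs typically follow a job/process layout like the fragment below; every value here is an illustrative placeholder, not the repo's actual configuration:

```yaml
# Illustrative AI-Toolkit-style layout only -- all values are hypothetical;
# see config/edit_transfer.yml in this repo for the real settings.
job: extension
config:
  name: "edit_transfer"
  process:
    - type: "sd_trainer"            # AI-Toolkit training process
      training_folder: "output"     # where checkpoints are written
      device: "cuda:0"
      network:
        type: "lora"                # LoRA fine-tuning
        linear: 16
        linear_alpha: 16
      train:
        batch_size: 1
        steps: 2000
        lr: 1e-4
      datasets:
        - folder_path: "data/edit_transfer"  # e.g. after unzipping data/edit_transfer.zip
```

If you change paths or hyperparameters, edit the repo's YAML file directly rather than this sketch.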
Once training is done, replace the file paths and run:

```shell
python edit_transfer.py --model_dir [your_model_dir] --model_name [your_model_name] --img_path [your_img_file_path] --prompt_file [your_prompt_file_path]
```
## Citation

```bibtex
@article{chen2025edit,
  title={Edit Transfer: Learning Image Editing via Vision In-Context Relations},
  author={Chen, Lan and Mao, Qi and Gu, Yuchao and Shou, Mike Zheng},
  journal={arXiv preprint arXiv:2503.13327},
  year={2025}
}
```


