
IC-Effect: Precise and Efficient Video Effects Editing via In-Context Learning

Yuanhang Li1, Yiren Song2, Junzhe Bai1, Xinran Liang1, Hu Yang3, Libiao Jin1, Qi Mao1

1School of Information and Communication Engineering, Communication University of China 2ShowLab, National University of Singapore 3Baidu Inc.

Project Website | arXiv | HuggingFace

Abstract

TL;DR: IC-Effect is the first instruction-guided video VFX editing framework.

We propose IC-Effect, an instruction-guided, DiT-based framework for few-shot video VFX editing that synthesizes complex effects (e.g., flames, particles, and cartoon characters) while strictly preserving spatial and temporal consistency. Video VFX editing is highly challenging because injected effects must blend seamlessly with the background, the background must remain entirely unchanged, and effect patterns must be learned efficiently from limited paired data. However, existing video editing models fail to satisfy these requirements. IC-Effect leverages the source video as clean contextual conditions, exploiting the contextual learning capability of DiT models to achieve precise background preservation and natural effect injection. A two-stage training strategy, consisting of general editing adaptation followed by effect-specific learning via EffectLoRA, ensures strong instruction following and robust effect modeling. To further improve efficiency, we introduce spatiotemporal sparse tokenization, enabling high fidelity with substantially reduced computation. We also release a paired VFX editing dataset spanning 15 high-quality visual styles. Extensive experiments show that IC-Effect delivers high-quality, controllable, and temporally consistent VFX editing, opening new possibilities for video creation.
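The spatiotemporal sparse tokenization idea can be illustrated with a minimal sketch (our own illustration under assumed strides, not the repository's actual tokenizer): keep every token for periodic anchor frames, and spatially subsample the token grid on the remaining frames to cut the sequence length the DiT must attend over.

```python
import numpy as np

def sparse_tokenize(video_tokens, temporal_stride=2, spatial_stride=2):
    """Subsample a (frames, height, width, dim) patch-token grid.

    Every `temporal_stride`-th frame is kept dense; the other frames
    keep only every `spatial_stride`-th token in both spatial axes.
    Illustration only -- not the released IC-Effect implementation.
    """
    f, h, w, d = video_tokens.shape
    kept = []
    for t in range(f):
        frame = video_tokens[t]
        if t % temporal_stride == 0:
            kept.append(frame.reshape(-1, d))  # dense anchor frame
        else:
            # spatially sparse frame: stride over both spatial axes
            kept.append(frame[::spatial_stride, ::spatial_stride].reshape(-1, d))
    return np.concatenate(kept, axis=0)

tokens = np.random.randn(8, 16, 16, 64)  # 8 frames of 16x16 patch tokens
sparse = sparse_tokenize(tokens)
print(tokens.shape[0] * tokens.shape[1] * tokens.shape[2], "->", sparse.shape[0])
# 2048 -> 1280 tokens (dense anchors: 4*256, sparse frames: 4*64)
```

With these example strides the token count drops from 2048 to 1280; the paper's point is that such sparsification preserves fidelity while substantially reducing attention cost.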

💡 Changelog

  • 2026.1.29 Release Inference code, VideoEditor weights and Benchmark!
  • 2025.12.17 Release Project Page and Paper!

📑 Todo List:

  • Release Inference code
  • Release weights of Video-Editor
  • Release Benchmark
  • Release VFX Dataset
  • Release weights of Effect-LoRA
  • Release Training code

Getting Started with IC-Effect

1. Environment setup

git clone https://github.com/CUC-MIPG/IC-Effect.git
cd IC-Effect

conda create -n IC-Effect python=3.10
conda activate IC-Effect

2. Requirements installation

pip install -r requirements.txt

3. Download pre-trained models

We use Wan2.2-I2V-A14B as the backbone. Please download it from HuggingFace, or use huggingface-cli:

pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.2-I2V-A14B-Diffusers --local-dir ./Wan-AI/Wan2.2-I2V-A14B-Diffusers

You can download the trained VideoEditor checkpoint for the IC-Effect model from HuggingFace for general video editing.

4. Inference

python src/inference_2.py \
    --model_id [your_model_dir] \
    --ckpt_path [your_ckpt_dir] \
    --condition_path [your_video_path] \
    --prompts [your_edit_instruction]
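To apply several effect instructions to the same source video, a small wrapper can assemble and launch the command above for each prompt. This is a sketch: the helper name, checkpoint directory, and example video path are placeholders, while the script path and flags are the ones shown above.

```python
import subprocess

def build_inference_cmd(model_dir, ckpt_dir, video_path, prompt):
    # Mirrors the inference CLI above; all paths passed in are placeholders.
    return [
        "python", "src/inference_2.py",
        "--model_id", model_dir,
        "--ckpt_path", ckpt_dir,
        "--condition_path", video_path,
        "--prompts", prompt,
    ]

prompts = ["surround the dancer with flames", "add falling snow particles"]
for p in prompts:
    cmd = build_inference_cmd("./Wan-AI/Wan2.2-I2V-A14B-Diffusers",
                              "./checkpoints/video_editor",   # hypothetical path
                              "./examples/input.mp4", p)      # hypothetical path
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run inference
```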

Citation

@article{li2025iceffect,
    title={IC-Effect: Precise and Efficient Video Effects Editing via In-Context Learning},
    author={Yuanhang Li and Yiren Song and Junzhe Bai and Xinran Liang and Hu Yang and Libiao Jin and Qi Mao},
    journal={arXiv preprint arXiv:2512.15635},
    year={2025}
}

About

Official repo for paper "IC-Effect: Precise and Efficient Video Effects Editing via In-Context Learning"
