PyTorch implementation of "All-In-One Image Restoration for Unknown Corruption" (D$^3$Net)
You can install the environment dependencies for D$^3$Net using the following commands.

- Clone the repo:

  ```shell
  git clone https://github.com/Anonymous0515/D3Net
  cd D3Net
  ```

- Install dependent packages:

  ```shell
  pip install -r requirements.txt
  ```
You can find the datasets used in the paper at the following sources:

- Deraining: Rain200L
- Dehazing: RESIDE (OTS)
- Deblurring: GoPro
- Low-light enhancement: LOL
We provide the training code for D$^3$Net (used in our paper). You can adapt it to your own needs.
Procedures

- Dataset preparation

  Download the training datasets and put them in the `data/train` folder.
- Training

  If you want to train our model, you can use the following command:

  ```shell
  CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 train.py --epoch 0 --n_epochs 2000 --train_datasets your_Datasets --model_folder your_model_folder
  ```

  You can also refer to the other arguments in `train.py` to adjust the training command to your needs. In addition, we default to training under five image degradation conditions. If you want to add more image degradations, please modify `utils/dataloader.py`.
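As a rough illustration of how mixed-degradation training data could be wired up when extending the dataloader, here is a minimal sketch. The dictionary keys, dataset folder names, and the `input`/`target` sub-folder layout are all assumptions for illustration, not the repo's actual conventions in `utils/dataloader.py`:

```python
import os
import random

# Hypothetical mapping from degradation type to dataset folder; the real
# names and layout live in utils/dataloader.py and may differ.
DEGRADATIONS = {
    "derain": "Rain200L",
    "dehaze": "RESIDE_OTS",
    "deblur": "GoPro",
    "lowlight": "LOL",
    "your_new_task": "your_new_dataset",  # add extra degradations here
}

def build_pairs(root, degradation, filenames):
    """Return (degraded, ground-truth) path pairs for one degradation type.

    Assumes each dataset folder holds `input/` and `target/` sub-folders,
    which is an illustrative convention, not the repo's actual layout.
    """
    folder = os.path.join(root, DEGRADATIONS[degradation])
    return [
        (os.path.join(folder, "input", f), os.path.join(folder, "target", f))
        for f in filenames
    ]

def sample_batch(root, filenames, rng=random):
    """Pick a random degradation per batch, mimicking mixed training."""
    deg = rng.choice(sorted(DEGRADATIONS))
    return deg, build_pairs(root, deg, filenames)
```

A dataset class would then load each path pair as an image tensor; registering a new degradation reduces to adding one dictionary entry and its folder.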
- Testing

  If you want to test our model and obtain PSNR and SSIM values, first fill in your_testImagePath, your_savePath, your_test_Epoch_Number, your_model_folder, and your_GT_ImagePath. Then you can use the following command:

  ```shell
  python test.py --image_path your_testImagePath --save_path your_savePath --epoch your_test_Epoch_Number --model_folder your_model_folder --target_data_dir your_GT_ImagePath
  ```
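The test script reports PSNR; as a quick sanity check on those numbers, a minimal NumPy reference could look like the sketch below. It assumes 8-bit inputs compared over all channels; the repo's own evaluation may differ (e.g. computing on the Y channel only):

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between two images (higher is better).

    A minimal reference for sanity-checking results, not the repo's
    exact evaluation code.
    """
    pred = pred.astype(np.float64)
    target = target.astype(np.float64)
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, two 8-bit images that differ everywhere by 10 gray levels give an MSE of 100 and hence a PSNR of about 28.1 dB.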
We borrow some code from BasicSR, RDN, and HorNet. We gratefully thank the authors for their excellent work!