Deep unrolling for learning optimal spatially varying regularisation parameters for Total Generalised Variation
*(Figure: example MRI reconstruction results. Panels, left to right: k-data, Zero-filled, Scalar TGV, U-TGV, Ground truth.)*
This repository contains the code for the paper "Deep unrolling for learning optimal spatially varying regularisation parameters for Total Generalised Variation" (https://arxiv.org/abs/2502.16532) by Thanh Trung Vu, Andreas Kofler, and Kostas Papafitsoros.
Please see the requirements.txt file for the required packages.
To create a new Python environment for this project, you can run the following commands:
```shell
# Create a new Python environment
python -m venv env_utgv
# Activate the environment
source env_utgv/bin/activate
# Install the required packages
pip install -r requirements.txt
```

If you have not installed the requirements:
```shell
# Assume you are in the root directory of the repository
# and, preferably, you are in a virtual environment.
pip install -r requirements.txt
```

To generate data [TODO: Add instructions to generate data]: The instructions to generate the data will be added soon.
Alternatively, you can download the data from the following link [TODO: Add link to the data]: We are waiting for permission from the data owner to share the data. Once that is granted, we will add the link here.
Start training the model: The model can be trained using the script train.py in the scripts/mri directory. For example, to train the model for MRI reconstruction using the TGV regularisation with the config as defined in the config/example_mri_tgv_config.yaml file on the CPU, you can run the following command:
```shell
# Assume you are in the root directory of the repository
# and you have installed the requirements.
python scripts/mri/train.py --application mri --config config/example_mri_tgv_config.yaml --device cpu
# Change to cuda for GPU or mps for Apple processors.
# Set `--uses_wandb True` to use Weights and Biases.
```

There are different arguments that can be passed to the script. You can see the list of arguments by running:

```shell
python scripts/mri/train.py --help
```

Here is the list of arguments as of the time of writing:
```
--application {denoising,mri}
                      The application to run. Currently supporting 'denoising' and 'mri'.
--config CONFIG       [Required] Path to the config file.
--output_dir OUTPUT_DIR
                      The output directory to store the `.pth` state dict file and other
                      logs. If provided, overwrites the config.
--device DEVICE       The device to use for training. If provided, overwrites the config.
                      'cuda' is recommended if a GPU is available.
--uses_wandb USES_WANDB
                      Whether to use WandB for logging. Defaults to False.
--logs_local LOGS_LOCAL
                      Whether to log locally. If provided without a value, or not provided
                      at all, the config and other logs are still saved locally by default.
                      Set explicitly to False to disable.
--savefile SAVEFILE   The file to save the model state dict and config.
--loads_pretrained LOADS_PRETRAINED
                      Whether to load a pretrained model. Defaults to False.
```

Evaluate the model [TODO: Convert notebooks to a script to evaluate the model and produce the results]: Download the pretrained models from [TODO: Add link to download the pretrained models]. The instructions to evaluate the model will be added soon.
You can also use the notebook `test_example.ipynb` in `scripts/mri` to produce some example images.
We extend a recently introduced deep unrolling framework for learning spatially varying regularisation parameters in inverse imaging problems to the case of Total Generalised Variation (TGV). The framework combines a deep convolutional neural network (CNN), which infers the two spatially varying TGV parameters, with an unrolled algorithmic scheme that solves the corresponding variational problem. The two subnetworks are jointly trained end-to-end in a supervised fashion, so the CNN learns to compute the parameters that drive the reconstructed images as close to the ground truth as possible. Numerical results in image denoising and MRI reconstruction show a significant qualitative and quantitative improvement over the best scalar-parameter TGV case, as well as over other approaches that employ spatially varying parameters computed by unsupervised methods. We also observe that the inferred spatially varying parameter maps have a consistent structure near image edges, calling for further theoretical investigation. In particular, the parameter that weighs the first-order TGV term exhibits a triple-edge structure with alternating high-low-high values, whereas the one that weighs the second-order term attains small values in a large neighbourhood around the edges.
In inverse imaging problems, variational regularisation problems of the type

$$\min_{u} \; \frac{1}{2}\|Au - f\|_{2}^{2} + \mathcal{R}(u;\Lambda)$$

are widely used to compute an estimation $u^{*}$ of an unknown ground truth image from the measured data $f$.

Here $A$ denotes the forward operator of the inverse problem (for instance the identity for denoising, or a subsampled Fourier transform for MRI), and $\mathcal{R}(\cdot\,;\Lambda)$ is a regularisation functional weighted by a parameter $\Lambda$, which can be a scalar or a spatially varying map over the image domain.

In the case where $\mathcal{R}$ is the Total Variation (TV), the effect of a spatially varying $\Lambda$ is well understood: the map locally balances data fidelity against smoothing.

Small values of $\Lambda$ preserve fine details and edges, whereas large values promote stronger smoothing and noise removal.
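To make the role of a spatially varying TV weight concrete, here is a minimal NumPy sketch of gradient descent on a *smoothed* TV denoising objective with a per-pixel regularisation map `lam`. All names, step sizes, and the smoothing parameter are illustrative assumptions and not code from this repository:

```python
import numpy as np

def tv_denoise_spatial(f, lam, n_iter=300, step=0.05, eps=0.1):
    """Gradient descent on 0.5*||u - f||^2 + sum_x lam(x)*sqrt(|grad u|^2 + eps^2).

    `lam` is a per-pixel regularisation map; eps smooths the TV term so that
    plain gradient descent applies (illustrative sketch only)."""
    u = f.astype(float)
    for _ in range(n_iter):
        # forward differences with replicated (Neumann-like) boundary
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = lam * ux / mag, lam * uy / mag
        # (approximate) divergence of the weighted, normalised gradient field
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        u = u - step * ((u - f) - div)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0         # piecewise-constant image
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
lam = np.full(clean.shape, 0.15)  # constant map here; a CNN would predict it per pixel
denoised = tv_denoise_spatial(noisy, lam)
print("MSE noisy:   ", float(np.mean((noisy - clean) ** 2)))
print("MSE denoised:", float(np.mean((denoised - clean) ** 2)))
```

Lowering `lam` near edges keeps them sharp while still smoothing flat regions; learning such a map from data is exactly what the CNN described below does, but for the two TGV weights.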
For higher-order extensions of TV, especially those defined in an infimal convolution manner, the role and the interplay of spatially varying regularisation parameters on the resulting image quality and structure are not as straightforward. A prominent example is the (second-order) Total Generalised Variation (TGV)

$$\mathrm{TGV}_{\alpha_{0},\alpha_{1}}(u) = \min_{v} \; \alpha_{1} \int_{\Omega} |Du - v| + \alpha_{0} \int_{\Omega} |\mathcal{E}v|,$$

where the minimum is taken over vector fields $v$ and $\mathcal{E}$ denotes the symmetrised gradient, $\mathcal{E}v = \frac{1}{2}(Dv + Dv^{T})$. In the spatially varying setting, the scalar weights $\alpha_{1}$ (first-order term) and $\alpha_{0}$ (second-order term) are replaced by parameter maps $\alpha_{1}(x)$ and $\alpha_{0}(x)$ over the image domain.
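A discrete version of this functional is easy to write down for a given candidate pair $(u, v)$. The NumPy sketch below (forward differences and function names of our own choosing, not this repository's code) evaluates the TGV cost with spatially varying maps, and shows why TGV is cheap on affine ramps where TV is not:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with replicated boundary."""
    return np.stack([np.diff(u, axis=1, append=u[:, -1:]),
                     np.diff(u, axis=0, append=u[-1:, :])])

def sym_grad(v):
    """Symmetrised gradient E v = (Dv + Dv^T)/2 of a 2-component field v."""
    v1x = np.diff(v[0], axis=1, append=v[0][:, -1:])
    v1y = np.diff(v[0], axis=0, append=v[0][-1:, :])
    v2x = np.diff(v[1], axis=1, append=v[1][:, -1:])
    v2y = np.diff(v[1], axis=0, append=v[1][-1:, :])
    off = 0.5 * (v1y + v2x)
    return np.stack([v1x, v2y, off, off])   # off counted twice: Frobenius norm

def tgv_cost(u, v, alpha1, alpha0):
    """Discrete  sum alpha1*|Du - v| + sum alpha0*|Ev|  with per-pixel maps."""
    t1 = np.sqrt(((grad(u) - v) ** 2).sum(axis=0))
    t2 = np.sqrt((sym_grad(v) ** 2).sum(axis=0))
    return float((alpha1 * t1).sum() + (alpha0 * t2).sum())

ramp = np.tile(np.arange(16.0), (16, 1))        # affine image: linear in x
a1 = np.full(ramp.shape, 1.0); a0 = np.full(ramp.shape, 1.0)
cost_v0 = tgv_cost(ramp, np.zeros((2,) + ramp.shape), a1, a0)  # v = 0: TV-like
cost_vg = tgv_cost(ramp, grad(ramp), a1, a0)                    # v = Du
print(cost_v0, cost_vg)   # choosing v = Du is far cheaper on a ramp
```

Because $v$ can absorb the slope of smooth regions (leaving only the nearly-zero symmetrised gradient to pay for), TGV penalises only second-order variation there, which is why it avoids the staircasing artefacts of plain TV.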
Here, we adapt the approach introduced in "Learning Regularization Parameter-Maps for Variational Image Reconstruction using Deep Neural Networks and Algorithm Unrolling" (https://arxiv.org/abs/2301.05888) to compute spatially varying regularisation maps for TGV. It involves training a network that consists of two subnetworks in a supervised fashion. The first subnetwork is a deep convolutional neural network (CNN) that takes as an input the data and outputs the two spatially varying TGV parameter maps $\alpha_{0}$ and $\alpha_{1}$. The second subnetwork is an unrolled iterative scheme that solves the corresponding TGV-regularised variational problem using these maps; the two subnetworks are trained jointly end-to-end.
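To illustrate what "unrolling" means here, the sketch below runs a *fixed* number of alternating gradient steps on a smoothed TGV denoising objective, with the parameter maps supplied as inputs. This is a simplified stand-in, not the repository's solver (the actual framework unrolls a different algorithmic scheme); the smoothing, step sizes, and all names are assumptions of our own:

```python
import numpy as np

def fdx(a): return np.diff(a, axis=1, append=a[:, -1:])    # forward x-difference
def fdy(a): return np.diff(a, axis=0, append=a[-1:, :])    # forward y-difference
def bdx(a): return np.diff(a, axis=1, prepend=a[:, :1])    # backward x-difference
def bdy(a): return np.diff(a, axis=0, prepend=a[:1, :])    # backward y-difference

def unrolled_tgv_denoise(f, alpha1, alpha0, T=100, step=0.05, eps=0.1):
    """T alternating gradient steps on the smoothed objective
        0.5||u - f||^2 + sum alpha1*|Du - v|_eps + sum alpha0*|Ev|_eps.
    A fixed, finite T is what makes the solver differentiable end-to-end,
    so training gradients can flow back into the map-predicting network."""
    u = f.astype(float)
    v1, v2 = np.zeros(f.shape), np.zeros(f.shape)
    for _ in range(T):
        r1, r2 = fdx(u) - v1, fdy(u) - v2                  # residual Du - v
        m = np.sqrt(r1**2 + r2**2 + eps**2)
        p1, p2 = alpha1 * r1 / m, alpha1 * r2 / m
        e11, e22 = fdx(v1), fdy(v2)                        # symmetrised gradient of v
        e12 = 0.5 * (fdy(v1) + fdx(v2))
        me = np.sqrt(e11**2 + e22**2 + 2 * e12**2 + eps**2)
        q11, q22, q12 = alpha0 * e11 / me, alpha0 * e22 / me, alpha0 * e12 / me
        u = u - step * ((u - f) - (bdx(p1) + bdy(p2)))     # descend in u
        v1 = v1 - step * (-p1 - (bdx(q11) + bdy(q12)))     # descend in v1
        v2 = v2 - step * (-p2 - (bdx(q12) + bdy(q22)))     # descend in v2
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[10:22, 10:22] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
a1 = np.full(clean.shape, 0.15); a0 = 2.0 * a1             # constant maps for the demo
recon = unrolled_tgv_denoise(noisy, a1, a0)
print("MSE noisy:", float(np.mean((noisy - clean)**2)),
      "MSE recon:", float(np.mean((recon - clean)**2)))
```

In the trained model, the constant `a1`/`a0` demo maps are replaced by the CNN's per-pixel predictions, and the loss on the reconstruction is backpropagated through all `T` iterations.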
For questions, please open an issue or email ttv22@cam.ac.uk.
- Add description of the data: training, validation, and test data
- Add instructions to generate data
- Add link to download the data
- Add link to download the pretrained models
- Convert notebooks in `notebooks/mri` to scripts to evaluate the model and produce the results (for example, the `generate_results.ipynb` notebook)
- Clean up the notebook `test_example.ipynb`
- Create readthedocs documentation
If you use the code for your work or if you found the code useful, please use the following BibTeX entry to cite the paper:
```bibtex
@article{...}
```




