
EMQNet

Setup

  1. Create a conda environment (or any environment of your preference).
  2. Below are the basic environment settings:
  • Python 3.8
  • pytorch==1.5.1
  • torchvision==0.6.1
  3. Refer to requirements.txt for the full list of dependencies.
  4. Set the CUDA device in train.py.
  5. In the config files, fill in your keys for WANDB support.

Directory structure

Config files (configs > *_cfg.py)

  • rn20_train_cfg.py: Training configuration for Any-Precision
    • Can be used to experiment with Bias Correction settings.
  • EMQ_cfg.py: Comment/uncomment the code snippets below to switch between the score evaluation and coreset training stages.

    Score evaluation cmd

    # ! python3 train_EMQ.py --cfg EMQ_cfg
    # ! python3 train_EMQ_batch_major.py --cfg EMQ_cfg
    DD_cfg_dict = {
        "dynamic": 1,
        "dynamics_path": "EMQ_npy/batch-124832/"
    }
    

    Coreset training cmd

    # ! python3 train_coreset.py --cfg EMQ_cfg
    DD_cfg_dict = {
        "general_pruning": 0.5,
        "pruning": 0.4,
        "dynamics_path": "EMQ_npy/batch-124832",
        "freq": 1,
        "init_temp": 0.5,
        "final_temp": 0.5,
        "temp_scheduler": 'linear',
    }
    
  • rn20_test_cfg.py: Testing configurations.

    Test cmd

    # ! python3 train_coreset.py --cfg EMQ_cfg 
    DD_cfg_dict = {
        "general_pruning": 0.5,
        "pruning": 0.8,
        "dynamics_path": "/root/EMQNet/EMQ_npy/EMQ-bit-core-batch/bn-bc",
        "freq": 1,
        "init_temp": 0.5,
        "final_temp": 1.0,
        "temp_scheduler": "log",
    }
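The init_temp, final_temp, and temp_scheduler fields suggest a temperature that is annealed from the initial to the final value over training. The actual schedule lives in the repository's code; as an illustration only, the function name and the two interpolation formulas below are assumptions, not the repo's implementation:

```python
import math

def anneal_temperature(epoch, total_epochs, init_temp, final_temp, scheduler="linear"):
    """Interpolate a temperature between init_temp and final_temp.

    Illustrative sketch only -- the real schedule is defined in the repo.
    """
    t = epoch / max(total_epochs - 1, 1)  # training progress in [0, 1]
    if scheduler == "linear":
        return init_temp + (final_temp - init_temp) * t
    elif scheduler == "log":
        # geometric (log-space) interpolation between the two temperatures
        return math.exp(math.log(init_temp)
                        + (math.log(final_temp) - math.log(init_temp)) * t)
    raise ValueError(f"unknown scheduler: {scheduler}")
```

With init_temp=0.5 and final_temp=0.5 (the coreset config above), both schedulers hold the temperature constant; the test config instead warms from 0.5 up to 1.0 on a log schedule.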
    

Training files

  • train.py: Original training file for Any-Precision (Bit-wise training scheme)
  • Score evaluation files
    • train_EMQ.py: EMQ Score evaluation file (Bit-wise training scheme)
    • train_EMQ_batch_major.py: EMQ Score evaluation file (Batch-wise training scheme)
    • train_score.py: EL2N, Entropy, Forget Score evaluation file (Bit-wise training scheme)
  • Coreset training files
    • train_coreset.py: Coreset training file
    • train_el2n.py, train_entropy.py, train_forget.py: Coreset training files for baseline methods.

Train-Eval Flow

  1. Score evaluation
  • python3 train_EMQ_batch_major.py --cfg EMQ_cfg
  • ./configs/score_run.sh
  2. Coreset training
  • python3 train_coreset.py --cfg EMQ_cfg
  3. BN adaptation
  • python3 train_coreset.py --cfg EMQ_cfg
  • Uncomment the snippet below:
# % ADAPTIVE BN
data_loader_iter = iter(data_loaders[32])
model = Adaptive_BN(model, data_loader_iter)
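The repository defines Adaptive_BN elsewhere; as a rough sketch of what BN adaptation typically does (the real implementation may differ), the helper re-estimates BatchNorm running statistics by forwarding a few batches in train mode while leaving the weights untouched:

```python
import torch
import torch.nn as nn

def Adaptive_BN(model, data_loader_iter, num_batches=10):
    # Sketch of BN adaptation: forward passes in train mode update
    # running_mean / running_var, while no_grad() keeps weights frozen.
    model.train()
    with torch.no_grad():
        for _ in range(num_batches):
            try:
                inputs, _ = next(data_loader_iter)
            except StopIteration:
                break
            model(inputs)
    model.eval()
    return model
```

This matters after coreset training because the BN statistics collected on the pruned subset may not match the full data distribution.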

Misc. useful custom settings

Custom quantization method

To add a custom quantization method, define a custom class in models/quantizer.py and reference it in the config file.

# models/quantizer.py
class custom_quantization(torch.autograd.Function):
    # quantization logic goes here; an illustrative example:
    @staticmethod
    def forward(ctx, input):
        return torch.round(input)
    @staticmethod
    def backward(ctx, grad_output):
        # straight-through estimator: pass the gradient through unchanged
        return grad_output
# config file
NN_cfg_dict = {
    "quant" : "custom_quantization"
}
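A torch.autograd.Function subclass is invoked through its apply method rather than instantiated. The self-contained example below (class name and rounding logic are illustrative, not the repo's) shows the forward/backward contract a custom quantizer must satisfy:

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Illustrative quantizer: round to integers in the forward pass."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: treat rounding as identity in backward.
        return grad_output

w = torch.randn(5, requires_grad=True)
q = RoundSTE.apply(w)   # Function subclasses are called via .apply
q.sum().backward()      # gradients flow through the rounding unchanged
```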

Wandb

# config file
wandb_cfg_dict = {
    "wandb_enabled": True,
    "key": "YOUR_KEY",
    "entity": "YOUR_ENTITY",
    "project": "YOUR_PROJECT",
    "name": "WANDB_RUN_NAME",
}

To name the wandb run, use the name key. If this value is None, the run name falls back to the wandb default.

Use pretrained model from wandb

# config file
wandb_cfg_dict = {
    "pretrain": "wandb_run_path_to_download_pretrained_model",
}

If this value is None, the pretrained model is instead loaded from the path in NN_cfg_dict["pretrain"].

Use sweep

# config file
wandb_cfg_dict = {
    "sweep_enabled": True,
    "sweep_config": {
        "name": NN_cfg_dict["log_id"],
        "method": "grid",
        "metric": {"goal": "maximize", "name": "Best_score"}
    },
    "sweep_count": 10000,
    "sweep_id": None
}
NN_cfg_dict = {
    "weight_bit_width": ["1,2,3,4,5,6,7,8,32", "1,2,4,8"],
}

For each parameter you want to sweep over, supply a list of candidate values.
To name the sweep, use the name key in sweep_config; if it is None, the sweep name falls back to the wandb default.
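For grid sweeps, wandb expects each swept parameter as a {"values": [...]} entry under the sweep config's "parameters" key. Assuming the training script converts the list-valued entries of NN_cfg_dict into that shape (the helper name below is hypothetical), the conversion might look like:

```python
def build_sweep_parameters(nn_cfg_dict):
    # Wrap every list-valued config entry as a wandb grid-search axis;
    # scalar entries are left out of the sweep.
    return {k: {"values": v}
            for k, v in nn_cfg_dict.items()
            if isinstance(v, list)}

params = build_sweep_parameters({
    "weight_bit_width": ["1,2,3,4,5,6,7,8,32", "1,2,4,8"],
    "log_id": "demo",   # scalar: not swept
})
```

The resulting dict would then be placed in sweep_config["parameters"] before calling wandb.sweep, with runs launched by wandb.agent up to sweep_count times.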

Citation

If you find our study helpful, please cite our paper:

@article{kim2025efficient,
  title={Efficient Multi-bit Quantization Network Training via Weight Bias Correction and Bit-wise Coreset Sampling},
  author={Kim, Jinhee and An, Jae Jun and Jeon, Kang Eun and Ko, Jong Hwan},
  journal={arXiv preprint arXiv:2510.20673},
  year={2025}
}

About

Official implementation of EMQNet, presented at NeurIPS 2025.
