I just uploaded the sub-directory for generating the LTH-based pruned SNNs. I saw that the latest HPCA 2025 SNN accelerator work refers to our pruning framework, so I think it is a good idea to equip this repo with the pruning-generation code.
Please find the code to prune the SNNs in the pruning_gen sub-directory. I have tested the code; there should be no issue running it and generating the pruned SNNs. Please go inside the sub-directory for more details, and let me know if you need any help!
We just uploaded the sub-directory for the artifact evaluation. Feel free to look inside the artifact sub-directory for more information!
We also provide the environment dependencies in requirements.txt, generated by pipreqs.
To install the dependencies: pip install -r requirements.txt
This repo provides the PyTorch source code for fine-tuning and profiling the SNN models.
1a). Profiling the SNN models to examine the original ratio of silent neurons.
python3 model_profile.py -profile --n_mask 0
1b). Profiling the SNN models to examine the ratio of silent neurons after masking out all neurons that spike only once.
python3 model_profile.py -profile --n_mask 1
2). Fine-tuning the SNN models to recover the accuracy lost from masking out the neurons that spike only once.
python3 fine_tune.py --n_masks 1
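The profiling steps above count neurons that never spike (or spike at most n_mask times) across the timesteps. Here is a hedged sketch of that ratio computation, assuming spike trains are recorded as a (T, N) binary tensor; this illustrates the idea rather than reproducing model_profile.py:

```python
import torch

def silent_ratio(spikes: torch.Tensor, n_mask: int = 0) -> float:
    """Fraction of neurons whose total spike count is <= n_mask.

    spikes: (T, N) binary tensor of per-timestep spikes.
    n_mask = 0 matches step 1a (originally silent neurons);
    n_mask = 1 matches step 1b (also masks neurons that spike once).
    Illustrative only -- not the exact model_profile.py implementation.
    """
    counts = spikes.sum(dim=0)  # total spikes per neuron over T timesteps
    return (counts <= n_mask).float().mean().item()

# Toy spike train: 4 timesteps, 4 neurons.
spikes = torch.tensor([[0, 1, 1, 0],
                       [0, 0, 1, 0],
                       [0, 0, 1, 0],
                       [0, 0, 0, 0]]).float()
print(silent_ratio(spikes, n_mask=0))  # neurons 0 and 3 never spike -> 0.5
print(silent_ratio(spikes, n_mask=1))  # neuron 1 spikes only once -> 0.75
```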
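During fine-tuning, the masked neurons can be kept silent by zeroing their outputs in the forward pass. A minimal sketch using a PyTorch forward hook, assuming a per-layer neuron mask has already been derived from profiling (the layer, mask, and hook approach are illustrative, not necessarily how fine_tune.py implements it):

```python
import torch
import torch.nn as nn

# Toy layer and a hypothetical neuron mask from profiling:
# 1 = keep the neuron, 0 = force it silent during fine-tuning.
layer = nn.Linear(4, 4)
neuron_mask = torch.tensor([1.0, 0.0, 1.0, 1.0])

# Returning a value from a forward hook replaces the layer's output,
# so masked neurons emit zero on every forward pass.
layer.register_forward_hook(lambda mod, inp, out: out * neuron_mask)

x = torch.randn(2, 4)
y = layer(x)
assert torch.all(y[:, 1] == 0)  # the masked neuron stays silent
```

Fine-tuning then proceeds with a standard training loop; gradients for the masked neurons' outputs are zeroed by the same multiplication.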
Package versions:
Python 3.9.7
CUDA 11.1
PyTorch 2.3.1 (py3.9_cuda11.8_cudnn8.7.0_0)
spikingjelly 0.0.0.0.12
More details to come soon.
If you find the above code useful for your research, please cite us with the following BibTeX:
@inproceedings{yin2024loas,
  title={LoAS: Fully Temporal-Parallel Dataflow for Dual-Sparse Spiking Neural Networks},
  author={Yin, Ruokai and Kim, Youngeun and Wu, Di and Panda, Priyadarshini},
  booktitle={2024 57th IEEE/ACM International Symposium on Microarchitecture (MICRO)},
  pages={1107--1121},
  year={2024},
  organization={IEEE}
}