GrATSI is a framework for graph-based attributions in time series models, bridging saliency methods and graph recovery. It uses post-hoc explainers to generate a structured graph, enabling the understanding and evaluation of feature interactions and relevance in temporal modelling.
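As an illustrative sketch only (not the GrATSI implementation), post-hoc saliency scores can be turned into a directed temporal graph by thresholding: the function name, the `scores[target][(source, lag)]` layout, and the threshold below are all assumptions for illustration.

```python
# Illustrative sketch only: recover a directed temporal graph from
# post-hoc saliency scores by thresholding. The score layout
# scores[target][(source, lag)] is an assumption, not the GrATSI interface.
def recover_graph(scores, threshold=0.1):
    """Keep an edge (source -> target, at a given lag) when its saliency
    magnitude exceeds the threshold."""
    edges = []
    for target, saliency in scores.items():
        for (source, lag), s in saliency.items():
            if abs(s) >= threshold:
                edges.append((source, target, lag))
    return sorted(edges)

if __name__ == "__main__":
    # Toy saliency scores from an explainer for the prediction of feature "y"
    scores = {"y": {("x", 1): 0.8, ("z", 1): 0.02, ("y", 2): -0.3}}
    print(recover_graph(scores))  # [('x', 'y', 1), ('y', 'y', 2)]
```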
Clone and set up the environment:

```shell
git clone https://github.com/<your-username>/grats-xai.git
cd grats-xai

# Create conda environment
conda create -n grats python=3.9 -y
conda activate grats

# Install requirements
pip install -r requirements.txt
```
```
├── configs/                 # YAML configs for data generation & experiments
│   └── data_gen.yaml
├── src/
│   ├── datasets/            # Synthetic DBN generator
│   │   └── synthetic_dbn.py
│   ├── models/              # Simple baselines (e.g. LSTM)
│   │   └── simple_lstm.py
│   ├── explainers/          # Explainability methods (IG, TimeRISE, etc.)
│   └── evaluation/          # Metrics (infidelity, comprehensiveness, etc.)
├── runs/                    # Auto-saved experiments (ignored via .gitignore)
└── README.md
```
All experiment options are set via the YAML configs.
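As a hedged illustration only (the actual key names live in the files under `configs/` and may differ), a config might select the explainer and evaluation metrics like so:

```yaml
# Hypothetical config sketch; actual keys in configs/ may differ.
explainer:
  name: integrated_gradients
  steps: 50
evaluation:
  metrics: [infidelity, comprehensiveness]
```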
Quick check (debugging):

```shell
python scripts/pipeline.py --config configs/pipeline_quick.yaml
```

Complete run:

```shell
python scripts/pipeline.py --config configs/pipeline.yaml
```
Aggregate the results across runs and generate visualizations:

```shell
python scripts/agg_results.py
```
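A minimal sketch of what aggregation could look like, assuming each run directory holds a pickled `{metric_name: value}` dict; the `aggregate_runs` function, the pickle contents, and the glob pattern are assumptions for illustration, not the interface of `scripts/agg_results.py`.

```python
# Hypothetical sketch: average each metric across all run result pickles.
# The {metric_name: float} pickle format is an assumption.
import pickle
import tempfile
from collections import defaultdict
from pathlib import Path

def aggregate_runs(root):
    """Average every metric found in */val.pkl under the given runs root."""
    sums, counts = defaultdict(float), defaultdict(int)
    for pkl in Path(root).glob("*/val.pkl"):
        with open(pkl, "rb") as f:
            metrics = pickle.load(f)
        for name, value in metrics.items():
            sums[name] += value
            counts[name] += 1
    return {name: sums[name] / counts[name] for name in sums}

if __name__ == "__main__":
    # Demo with two fake runs in a temporary directory
    root = Path(tempfile.mkdtemp())
    for i, infid in enumerate([0.25, 0.75]):
        d = root / f"dbn_run{i}"
        d.mkdir()
        with open(d / "val.pkl", "wb") as f:
            pickle.dump({"infidelity": infid}, f)
    print(aggregate_runs(root))  # {'infidelity': 0.5}
```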
Outputs are saved under:

```
runs/dbn_n{params}/
├── train.pkl
├── val.pkl
└── plots/
```
Choose an explainer:

```
# Integrated Gradients
# TimeRISE
# TODO: Integrated Hessians, DeepLIFT, TimeX++, etc.
```