ICT-FinD-Lab/awesome-robust-graph-learning


Awesome Robust Graph Learning Resources

Robust graph learning aims to develop learning algorithms that maintain predictive accuracy and stability in the presence of structural noise, adversarial perturbations, and out-of-distribution (OOD) shifts.
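To make the definition above concrete, here is a minimal, illustrative numpy sketch (not taken from any listed paper) showing why GNN-style models are sensitive to structural perturbations: a single flipped edge changes the normalized adjacency, and therefore the propagated node representations.

```python
import numpy as np

def normalize_adj(A):
    # GCN-style symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

# Toy 4-node path graph (0-1-2-3) with one-hot node features
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1
X = np.eye(4)

clean = normalize_adj(A) @ X            # one step of feature propagation

# Structural perturbation: insert a single adversarial edge (0, 3)
A_pert = A.copy()
A_pert[0, 3] = A_pert[3, 0] = 1
perturbed = normalize_adj(A_pert) @ X

shift = np.abs(clean - perturbed).max()
print(f"max representation shift from one edge flip: {shift:.3f}")
```

Robust graph learning methods try to keep this kind of shift (and the resulting prediction change) small under bounded perturbation budgets.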

This repository collects:

  1. Academic Papers
  2. Online Courses and Videos
  3. Graph Datasets
  4. Open-source and Commercial Libraries/Toolkits
  5. Key Conferences & Journals

More items will be added to the repository over time. Please feel free to suggest other key resources by opening an issue, submitting a pull request, or emailing me at liuyang173@mails.ucas.edu.cn. Enjoy reading!


Table of Contents

1. Tutorials & Benchmarks
2. Toolbox & Datasets
3. Papers
4. Key Conferences/Workshops/Journals
References

1. Tutorials & Benchmarks

1.1. Benchmarks

| Data Type | Paper Title | Venue | Year | Ref | Materials |
|---|---|---|---|---|---|
| Graph | Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning | NeurIPS | 2021 | [1] | [PDF], [Code] |

1.2. Tutorials

| Tutorial Title | Venue | Year | Ref | Materials |
|---|---|---|---|---|
| Robust Graph Learning on the Web: Challenges, Methods, and Applications | WWW | 2026 | [2] | [HTML] |

2. Toolbox & Datasets


3. Papers

3.1. Graph Adversarial Attack

| Category | Paper Title | Venue | Year | Ref | Materials |
|---|---|---|---|---|---|
| Evasion Attack | Adversarial Attacks on Neural Networks for Graph Data | KDD | 2018 | [3] | [PDF], [Code] |
| Evasion Attack | Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks | BigData | 2019 | [4] | [PDF], Code: N/A |
| Evasion Attack | TDGIA: Effective Injection Attacks on Graph Neural Networks | KDD | 2021 | [5] | [PDF], [Code] |
| Poisoning Attack | Adversarial Attacks on Graph Neural Networks via Meta Learning | ICLR | 2019 | [6] | [PDF], [Code] |
| Poisoning Attack (Backdoor) | Backdoor Attacks to Graph Neural Networks | SACMAT | 2021 | [7] | [PDF], [Code] |
| Poisoning Attack (Backdoor) | Unnoticeable Backdoor Attacks on Graph Neural Networks | WWW | 2023 | [8] | [PDF], [Code] |
| Poisoning Attack (Backdoor) | SPEAR: A Structure-Preserving Manipulation Method for Graph Backdoor Attacks | WWW | 2025 | [9] | [PDF], [Code] |

Note

For the indirect poisoning attack of [4], no public official code repository was found during verification, so its code field is marked N/A.
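As a point of reference for the attack papers above, the sketch below is a hedged toy baseline, not a reimplementation of any listed method: it flips random node pairs under a perturbation budget, whereas real attacks such as Nettack [3] or TDGIA [5] select perturbations by gradient or heuristic scoring.

```python
import numpy as np

def random_flip_attack(A, budget, rng):
    """Toy structural attack baseline: flip up to `budget` random node
    pairs in a symmetric adjacency matrix (adding the edge if absent,
    removing it if present)."""
    A_pert = A.copy()
    n = A.shape[0]
    for _ in range(budget):
        i, j = rng.choice(n, size=2, replace=False)
        A_pert[i, j] = A_pert[j, i] = 1 - A_pert[i, j]
    return A_pert

rng = np.random.default_rng(0)

# Random undirected toy graph on 6 nodes
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T

A_pert = random_flip_attack(A, budget=3, rng=rng)
changed = int(np.abs(A - A_pert).sum() / 2)   # number of flipped pairs
print("flipped pairs:", changed)
```

Serious evaluations compare a defense against such a random baseline first; a method that only beats random flips is not yet robust to adaptive attacks.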

3.2. Graph Adversarial Defense

The papers below are selected as broadly model-level defense works. The scope is intentionally relaxed: besides strict model-level privacy defenses, we also include papers on privacy-preserving GNN training/inference, graph-reconstruction defense, training-graph protection, and model/IP protection mechanisms that are closely related to defending trained GNN systems.

| Paper Title | Venue | Year | Ref | Materials |
|---|---|---|---|---|
| NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data | IEEE TKDE | 2023 | [10] | [PDF], [Code] |
| Locally Private Graph Neural Networks | ACM CCS | 2021 | [11] | [PDF], [Code] |
| Releasing Graph Neural Networks with Differential Privacy Guarantees | AISec Workshop | 2021 | [12] | [PDF] |
| GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation | USENIX Security | 2023 | [13] | [PDF], [Code] |
| SecGNN: Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service | IEEE TSC | 2023 | [14] | [Paper] |
| MDP: Privacy-Preserving GNN Based on Matrix Decomposition and Differential Privacy | IEEE JCC | 2023 | [15] | [Paper] |
| On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation | ICML | 2023 | [16] | [PDF], [Code] |
| OblivGNN: Oblivious Inference on Transductive and Inductive Graph Neural Networks | USENIX Security | 2024 | [17] | [PDF] |
| GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models | IEEE S&P | 2025 | [18] | [Paper] |
| Watermarking Graph Neural Networks based on Backdoor Attacks | IEEE EuroS&P | 2023 | [19] | [Paper] |
| Deep Anomaly Detection on Attributed Networks | SDM | 2019 | [20] | [Paper] |
| Graph-Level Anomaly Detection via Hierarchical Memory Networks | ECML PKDD | 2023 | [21] | [PDF], [Code] |
| GAD-NR: Graph Anomaly Detection via Neighborhood Reconstruction | WSDM | 2024 | [22] | [PDF], [Code] |
| Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning | IEEE TNNLS | 2021 | [23] | [PDF] |
| One-Class Graph Neural Networks for Anomaly Detection in Attributed Networks | Neurocomputing | 2021 | [24] | [PDF], [Code] |
| Adversarial Examples on Graph Data: Deep Insights into Attack and Defense | IJCAI | 2019 | [25] | [PDF] |
| All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs | WSDM | 2020 | [26] | [PDF] |
| Graph Structure Learning for Robust Graph Neural Networks | KDD | 2020 | [27] | [PDF], [Code] |
| Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN | KDD | 2022 | [28] | [PDF] |
| Initializing Then Refining: A Simple Graph Attribute Imputation Network | IJCAI | 2022 | [29] | [PDF] |
| Dilution of Unreliable Information: Learning in Graph with Noisy Structures and Absent Attributes | ICDM | 2025 | [30] | [PDF] |
| Learning Strong Graph Neural Networks with Weak Information | KDD | 2023 | [31] | [PDF], [Code] |
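One of the simplest defense families in the table is preprocessing by low-rank approximation, as argued in [26]: adversarial edge perturbations tend to live in the high-frequency (small singular value) part of the adjacency spectrum, so truncating the SVD filters much of the attack. The following is a minimal numpy sketch in that spirit, using a synthetic two-community graph rather than any benchmark from the paper.

```python
import numpy as np

def low_rank_defense(A, rank):
    """Reconstruct the adjacency matrix from its top-`rank` singular
    components, then clip to a valid edge-weight range. Illustrative
    only; [26] describes the full method and evaluation."""
    U, s, Vt = np.linalg.svd(A)
    A_lr = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return np.clip(A_lr, 0.0, 1.0)

# Block-structured toy graph: two dense 4-node communities
A = np.zeros((8, 8))
A[:4, :4] = 1.0
A[4:, 4:] = 1.0
np.fill_diagonal(A, 0.0)

# One "adversarial" cross-community edge
A[0, 7] = A[7, 0] = 1.0

A_clean = low_rank_defense(A, rank=2)
print("cross-community edge weight after cleaning: %.2f" % A_clean[0, 7])
print("within-community edge weight after cleaning: %.2f" % A_clean[0, 1])
```

Because the clean two-block structure is approximately rank 2 while the injected edge is not, the rank-2 reconstruction suppresses the cross-community edge far more than the legitimate within-community edges.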

4. Key Conferences/Workshops/Journals

4.1. Conferences & Workshops

ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD)

ACM International Conference on Management of Data (SIGMOD)

The Web Conference (WWW)

IEEE International Conference on Data Mining (ICDM)

SIAM International Conference on Data Mining (SDM)

IEEE International Conference on Data Engineering (ICDE)

ACM International Conference on Information and Knowledge Management (CIKM)

ACM International Conference on Web Search and Data Mining (WSDM)

The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD)

The Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD)

4.2. Journals

ACM Transactions on Knowledge Discovery from Data (TKDD)

IEEE Transactions on Knowledge and Data Engineering (TKDE)

ACM SIGKDD Explorations Newsletter

Data Mining and Knowledge Discovery

Knowledge and Information Systems (KAIS)


References

[1] Deyu Zhu, Qinkai Zheng, Yang Liu. 2021. Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning. In Advances in Neural Information Processing Systems (NeurIPS).
[2] Xiang Ao, Yang Liu, Guansong Pang, Yuanhao Ding, Hezhe Qiao, Dawei Cheng, and Qing He. 2026. Robust Graph Learning on the Web: Challenges, Methods, and Applications. In Companion Proceedings of the ACM Web Conference (WWW).
[3] Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. 2018. Adversarial Attacks on Neural Networks for Graph Data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), 2847–2856. https://doi.org/10.1145/3219819.3220078.
[4] Tsubasa Takahashi. 2019. Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks. In 2019 IEEE International Conference on Big Data (Big Data), 1395–1400.
[5] Xu Zou, Qinkai Zheng, Yuxiao Dong, Xinyu Guan, Evgeny Kharlamov, Jialiang Lu, and Jie Tang. 2021. TDGIA: Effective Injection Attacks on Graph Neural Networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2461–2471. https://doi.org/10.1145/3447548.3467314.
[6] Daniel Zügner and Stephan Günnemann. 2019. Adversarial Attacks on Graph Neural Networks via Meta Learning. In International Conference on Learning Representations (ICLR).
[7] Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. 2021. Backdoor Attacks to Graph Neural Networks. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies (SACMAT), 15–26. https://doi.org/10.1145/3450569.3463560.
[8] Enyan Dai, Minhua Lin, Xiang Zhang, and Suhang Wang. 2023. Unnoticeable Backdoor Attacks on Graph Neural Networks. In Proceedings of the ACM Web Conference 2023 (WWW), 2263–2273. https://doi.org/10.1145/3543507.3583392.
[9] Yuanhao Ding, Yang Liu, Yugang Ji, Weigao Wen, Qing He, and Xiang Ao. 2025. SPEAR: A Structure-Preserving Manipulation Method for Graph Backdoor Attacks. In Proceedings of the ACM Web Conference 2025 (WWW), 1237–1247. https://doi.org/10.1145/3696410.3714665.
[10] I-C. Hsieh and C.-T. Li. 2023. NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data. IEEE Transactions on Knowledge and Data Engineering, 35(1), 796–809.
[11] S. Sajadmanesh and D. Gatica-Perez. 2021. Locally Private Graph Neural Networks. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2130–2145.
[12] I. E. Olatunji, W. Nejdl, and M. Khosla. 2021. Releasing Graph Neural Networks with Differential Privacy Guarantees. In Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security (AISec), 31–40.
[13] S. Sajadmanesh, A. S. Shamsabadi, A. Bellet, and D. Gatica-Perez. 2023. GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation. In 32nd USENIX Security Symposium (USENIX Security 23), 3223–3240.
[14] S. Wang, Y. Zheng, and X. Jia. 2023. SecGNN: Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service. IEEE Transactions on Services Computing, 16(4), 2923–2938.
[15] W. Xu, B. Shi, J. Zhang, Z. Feng, T. Pan, and B. Dong. 2023. MDP: Privacy-Preserving GNN Based on Matrix Decomposition and Differential Privacy. In 2023 IEEE 14th International Conference on Joint Cloud Computing (JCC), 38–45.
[16] Z. Zhou, C. Zhou, X. Li, J. Yao, Q. Yao, and B. Han. 2023. On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation. In Proceedings of the 40th International Conference on Machine Learning (ICML), 41803–41833.
[17] Z. Xu, S. Lai, X. Liu, A. Abuadbba, X. Yuan, and X. Yi. 2024. OblivGNN: Oblivious Inference on Transductive and Inductive Graph Neural Networks. In 33rd USENIX Security Symposium (USENIX Security 24), 2209–2226.
[18] J. Lou, X. Yuan, R. Zhang, X. Yuan, N. Z. Gong, and N.-F. Tzeng. 2025. GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models. In 2025 IEEE Symposium on Security and Privacy (S&P), 2095–2113.
[19] J. Xu, S. Koffas, O. Ersoy, and S. Picek. 2023. Watermarking Graph Neural Networks based on Backdoor Attacks. In 2023 IEEE European Symposium on Security and Privacy (EuroS&P), 1179–1197.
[20] Kaize Ding, Jundong Li, Rohit Bhanushali, and Huan Liu. 2019. Deep anomaly detection on attributed networks. In Proceedings of the 2019 SIAM International Conference on Data Mining (SDM), 594–602. https://doi.org/10.1137/1.9781611975673.67.
[21] Chaoxi Niu, Guansong Pang, and Ling Chen. 2023. Graph-level anomaly detection via hierarchical memory networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 201–218.
[22] Amit Roy, Juan Shu, Jia Li, Carl Yang, Olivier Elshocht, Jeroen Smeets, and Pan Li. 2024. GAD-NR: Graph anomaly detection via neighborhood reconstruction. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM), 576–585.
[23] Yixin Liu, Zhao Li, Shirui Pan, Chen Gong, Chuan Zhou, and George Karypis. 2021. Anomaly detection on attributed networks via contrastive self-supervised learning. IEEE Transactions on Neural Networks and Learning Systems, 33(6), 2378–2392.
[24] Xuhong Wang, Baihong Jin, Ying Du, Ping Cui, Yingshui Tan, and Yupu Yang. 2021. One-class graph neural networks for anomaly detection in attributed networks. Neurocomputing, 33, 12073–12085.
[25] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. 2019. Adversarial examples on graph data: Deep insights into attack and defense. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), 4816–4823. https://doi.org/10.24963/ijcai.2019/669.
[26] Negin Entezari, Saba A. Al-Sayouri, Amirali Darvishzadeh, and Evangelos E. Papalexakis. 2020. All you need is low (rank): Defending against adversarial attacks on graphs. In Proceedings of the 13th ACM International Conference on Web Search and Data Mining (WSDM), 169–177.
[27] Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. 2020. Graph structure learning for robust graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 66–74.
[28] Kuan Li, Yang Liu, Xiang Ao, Jianfeng Chi, Jinghua Feng, Hao Yang, and Qing He. 2022. Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 925–935.
[29] Wenxuan Tu, Sihang Zhou, Xinwang Liu, Yue Liu, Zhiping Cai, En Zhu, Changwang Zhang, and Jieren Cheng. 2022. Initializing Then Refining: A Simple Graph Attribute Imputation Network. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 3494–3500.
[30] Xinxin Li, Yang Liu, Siyong Xu, Weigao Wen, Qing He, and Xiang Ao. 2025. Dilution of Unreliable Information: Learning in Graph with Noisy Structures and Absent Attributes. In Proceedings of the IEEE International Conference on Data Mining (ICDM), 1360–1369.
[31] Yixin Liu, Kaize Ding, Jianling Wang, Vincent Lee, Huan Liu, and Shirui Pan. 2023. Learning Strong Graph Neural Networks with Weak Information. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 1559–1571.
