Robust graph learning aims to develop learning algorithms that maintain predictive accuracy and stability in the presence of structural noise, adversarial perturbations, and out-of-distribution (OOD) shifts.
This repository collects:
Academic Papers
Online Courses and Videos
Graph Datasets
Open-source and Commercial Libraries/Toolkits
Key Conferences & Journals
More items will be added to the repository.
Please feel free to suggest other key resources by opening an issue,
submitting a pull request, or emailing me at liuyang173@mails.ucas.edu.cn.
Enjoy reading!
3.2. Graph Adversarial Defense
The following papers are selected as broadly model-level defense works. The scope here is intentionally relaxed: beyond strict model-level privacy defenses, we also include work on privacy-preserving GNN training and inference, defenses against graph reconstruction attacks, training-graph protection, and model/IP protection mechanisms closely related to defending trained GNN systems.
| Paper Title | Venue | Year | Ref | Materials |
|---|---|---|---|---|
| NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data | TKDE | 2023 | | |
Deyu Zhu, Qinkai Zheng, Yang Liu. 2021. Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning. In Advances in Neural Information Processing Systems (NeurIPS).
Xiang Ao, Yang Liu, Guansong Pang, Yuanhao Ding, Hezhe Qiao, Dawei Cheng, and Qing He. 2026. Robust Graph Learning on the Web: Challenges, Methods, and Applications. In Companion Proceedings of the ACM Web Conference (WWW).
Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. 2018. Adversarial Attacks on Neural Networks for Graph Data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), 2847–2856. https://doi.org/10.1145/3219819.3220078.
Tsubasa Takahashi. 2019. Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks. In 2019 IEEE International Conference on Big Data (Big Data), 1395–1400.
Xu Zou, Qinkai Zheng, Yuxiao Dong, Xinyu Guan, Evgeny Kharlamov, Jialiang Lu, and Jie Tang. 2021. TDGIA: Effective Injection Attacks on Graph Neural Networks. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2461–2471. https://doi.org/10.1145/3447548.3467314.
Daniel Zügner and Stephan Günnemann. 2019. Adversarial Attacks on Graph Neural Networks via Meta Learning. In International Conference on Learning Representations (ICLR).
Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. 2021. Backdoor Attacks to Graph Neural Networks. In Proceedings of the 26th ACM Symposium on Access Control Models and Technologies (SACMAT), 15–26. https://doi.org/10.1145/3450569.3463560.
Enyan Dai, Minhua Lin, Xiang Zhang, and Suhang Wang. 2023. Unnoticeable Backdoor Attacks on Graph Neural Networks. In Proceedings of the ACM Web Conference 2023 (WWW), 2263–2273. https://doi.org/10.1145/3543507.3583392.
Yuanhao Ding, Yang Liu, Yugang Ji, Weigao Wen, Qing He, and Xiang Ao. 2025. SPEAR: A Structure-Preserving Manipulation Method for Graph Backdoor Attacks. In Proceedings of the ACM Web Conference 2025 (WWW), 1237–1247. https://doi.org/10.1145/3696410.3714665.
I.-C. Hsieh and C.-T. Li. 2023. NetFense: Adversarial Defenses against Privacy Attacks on Neural Networks for Graph Data. IEEE Transactions on Knowledge and Data Engineering 35(1), 796–809.
S. Sajadmanesh and D. Gatica-Perez. 2021. Locally Private Graph Neural Networks. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2130–2145.
I. E. Olatunji, W. Nejdl, and M. Khosla. 2021. Releasing Graph Neural Networks with Differential Privacy Guarantees. In Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security (AISec), 31–40.
S. Wang, Y. Zheng, and X. Jia. 2023. SecGNN: Privacy-Preserving Graph Neural Network Training and Inference as a Cloud Service. IEEE Transactions on Services Computing 16(4), 2923–2938.
W. Xu, B. Shi, J. Zhang, Z. Feng, T. Pan, and B. Dong. 2023. MDP: Privacy-Preserving GNN Based on Matrix Decomposition and Differential Privacy. In 2023 IEEE 14th International Conference on Joint Cloud Computing (JCC), 38–45.
Z. Zhou, C. Zhou, X. Li, J. Yao, Q. Yao, and B. Han. 2023. On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation. In Proceedings of the 40th International Conference on Machine Learning (ICML), 41803–41833.
J. Lou, X. Yuan, R. Zhang, X. Yuan, N. Z. Gong, and N.-F. Tzeng. 2025. GRID: Protecting Training Graph from Link Stealing Attacks on GNN Models. In 2025 IEEE Symposium on Security and Privacy (S&P), 2095–2113.
J. Xu, S. Koffas, O. Ersoy, and S. Picek. 2023. Watermarking Graph Neural Networks based on Backdoor Attacks. In 2023 IEEE European Symposium on Security and Privacy (EuroS&P), 1179–1197.
Kaize Ding, Jundong Li, Rohit Bhanushali, and Huan Liu. 2019. Deep anomaly detection on attributed networks. In Proceedings of the 2019 SIAM International Conference on Data Mining (SDM), 594–602. https://doi.org/10.1137/1.9781611975673.67.
Chaoxi Niu, Guansong Pang, and Ling Chen. 2023. Graph-level anomaly detection via hierarchical memory networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD), 201–218.
Amit Roy, Juan Shu, Jia Li, Carl Yang, Olivier Elshocht, Jeroen Smeets, and Pan Li. 2024. Gad-nr: Graph anomaly detection via neighborhood reconstruction. In Proceedings of the 17th ACM International Conference on Web Search and Data Mining (WSDM), 576–585.
Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. 2019. Adversarial examples on graph data: Deep insights into attack and defense. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), 4816–4823. https://doi.org/10.24963/ijcai.2019/669.
Negin Entezari, Saba A. Al-Sayouri, Amirali Darvishzadeh, and Evangelos E. Papalexakis. 2020. All you need is low (rank) defending against adversarial attacks on graphs. In Proceedings of the 13th ACM International Conference on Web Search and Data Mining (WSDM), 169–177.
Wei Jin, Yao Ma, Xiaorui Liu, Xianfeng Tang, Suhang Wang, and Jiliang Tang. 2020. Graph structure learning for robust graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 66–74.
Kuan Li, Yang Liu, Xiang Ao, Jianfeng Chi, Jinghua Feng, Hao Yang, and Qing He. 2022. Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 925–935.
Wenxuan Tu, Sihang Zhou, Xinwang Liu, Yue Liu, Zhiping Cai, En Zhu, Changwang Zhang, and Jieren Cheng. 2022. Initializing Then Refining: A Simple Graph Attribute Imputation Network. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 3494–3500.
Xinxin Li, Yang Liu, Siyong Xu, Weigao Wen, Qing He, and Xiang Ao. 2025. Dilution of Unreliable Information: Learning in Graph with Noisy Structures and Absent Attributes. In Proceedings of the IEEE International Conference on Data Mining (ICDM), 1360–1369.
Yixin Liu, Kaize Ding, Jianling Wang, Vincent Lee, Huan Liu, and Shirui Pan. 2023. Learning Strong Graph Neural Networks with Weak Information. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 1559–1571.