- AI Threat Landscape Report
- Artificial Intelligence Cybersecurity Challenges
- Artificial Intelligence: How to make Machine Learning Cyber Secure?
- Securing Machine Learning Algorithms
- AI/ML Pivots to the Security Development Lifecycle Bug Bar
- AI security risk assessment using Counterfit
- A holistic approach to PPML (Privacy Preserving ML)
- Confidential AI - Confidential ONNX Inference Server
- Counterfit
- Failure Modes in Machine Learning
- Practical application of artificial intelligence that can transform cybersecurity
- Privacy Preserving Machine Learning: Maintaining confidentiality and preserving trust
- Securing the Future of Artificial Intelligence and Machine Learning at Microsoft
- Threat Modeling AI/ML Systems and Dependencies
- Unlocking the potential of Privacy-Preserving AI with Azure Confidential Computing on NVIDIA H100
- Modernize Cybersecurity With AI
- NVIDIA Developer Blog - Federated Learning with Homomorphic Encryption
- NVIDIA GPU Confidential Computing
- NVIDIA Morpheus AI Cybersecurity Framework
- MITRE - Adversarial ML 101
- MITRE - Adversarial ML Threat Matrix
- MITRE ATLAS - Adversarial Threat Landscape for Artificial-Intelligence Systems
- NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)
- NISTIR 8332 - Trust and Artificial Intelligence
- NISTIR 8269 - A Taxonomy and Terminology of Adversarial Machine Learning
- Deep Learning with Differential Privacy - Abadi et al
- Dos and Don'ts of Machine Learning in Computer Security - Arp et al
- Entangled Watermarks as a Defense against Model Extraction - Jia et al
- Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning - Jagielski et al
- On Adaptive Attacks to Adversarial Example Defenses - Tramer et al -- [slides] [video]
- Practical Black-Box Attacks against Machine Learning - Papernot et al
- Privacy-Preserving Machine Learning: Methods, Challenges and Directions - Xu et al
- Linaro and Arm Confidential AI Tech Event, June 28th, 2022
- Confidential AI: Secure Data Processing Pipeline
- Confidential AI: Building confidence in your ML models
- Confidential AI: Arduino Pro’s recipe
- Confidential AI: Towards Pervasive and Trustworthy Artificial Intelligence
- Confidential AI: Smart Homes without Spying
- Confidential AI: ... on Arm, why, what and how?
- Adversarial Robustness Toolbox
- DARPA GARD (Guaranteeing AI Robustness to Deception) Project
- Europol: Malicious Uses and Abuses of Artificial Intelligence
- OWASP AI Security and Privacy Guide
- PyTorch PPML Framework Tutorial
- Stealing Machine Learning Models via Prediction APIs
- Stiftung Neue Verantwortung: Securing Artificial Intelligence
- The Conference on Applied Machine Learning in Information Security (CAMLIS) sessions and slides
- The European Commission’s Artificial Intelligence Act
- The European Commission White Paper: On Artificial Intelligence - A European approach to excellence and trust
- The current state of AI security and AI safety standards in China, with reference to international standards
- Trustworthy Machine Learning course materials, Nicolas Papernot