
🧠 NullSec Adversarial

Adversarial Machine Learning Attack Toolkit


Evasion, poisoning, and extraction attacks against ML models


🎯 Overview

NullSec Adversarial is a comprehensive toolkit for testing machine learning model robustness. It implements state-of-the-art adversarial attacks, covering evasion (FGSM, PGD, C&W, AutoAttack), model extraction, membership inference, and model inversion, across image classifiers, NLP models, and tabular ML pipelines.

⚑ Features

| Feature | Description |
| --- | --- |
| Evasion Attacks | FGSM, PGD, C&W, DeepFool, AutoAttack |
| Model Extraction | Query-based model stealing with knockoff networks |
| Membership Inference | Determine whether a sample was in the training data |
| Model Inversion | Reconstruct training data from model outputs |
| Transferability | Generate transferable adversarial examples |
| Defence Evaluation | Test adversarial training and certified defences |
| Framework Support | PyTorch, TensorFlow, scikit-learn, ONNX |
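FGSM, the simplest of the evasion attacks above, perturbs an input by one step of size `eps` in the direction of the sign of the loss gradient with respect to that input. A minimal NumPy sketch against a toy logistic-regression model (the model, weights, and values here are illustrative, not part of the toolkit):

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """One-step FGSM against a logistic-regression model.

    Moves x by eps in the sign of the gradient of the binary
    cross-entropy loss with respect to the input.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(y=1)
    grad_x = (p - y) * w                    # dLoss/dx for sigmoid + BCE
    return x + eps * np.sign(grad_x)

# Toy model and input (illustrative values)
w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, 0.5]), 1.0
x_adv = fgsm(x, w, b, y, eps=0.1)
```

The adversarial point stays inside an L-infinity ball of radius `eps` around `x` while the model's confidence in the true class drops; PGD iterates this step with projection back into the ball.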

⚔️ Attack Matrix

| Attack | Type | Domain | Threat Model |
| --- | --- | --- | --- |
| FGSM | Evasion | Image/NLP | White-box |
| PGD | Evasion | Image/NLP | White-box |
| C&W | Evasion | Image | White-box |
| AutoAttack | Evasion | Image | White-box |
| HopSkipJump | Evasion | Image | Black-box |
| Knockoff Nets | Extraction | Any | Black-box |
| Shadow Models | Membership | Any | Black-box |
| MI-FACE | Inversion | Image | White-box |
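The shadow-model attack in the matrix trains auxiliary models to calibrate a per-sample membership decision. A much simpler baseline with the same goal is the loss-threshold attack: flag a sample as a training member when the target model's loss on it is unusually low, since models typically fit their training data more tightly. A sketch with made-up softmax outputs (not produced by this toolkit):

```python
import numpy as np

def per_sample_loss(probs, labels):
    # Cross-entropy of the model's probability for each sample's true class
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def infer_membership(probs, labels, threshold):
    # Loss-threshold baseline: low loss => likely seen during training
    return per_sample_loss(probs, labels) < threshold

# Illustrative outputs: members are fitted tightly, non-members less so
member_probs = np.array([[0.95, 0.05], [0.90, 0.10]])
outsider_probs = np.array([[0.60, 0.40]])
labels_m, labels_o = np.array([0, 0]), np.array([0])
```

The threshold is usually set from losses on known held-out data; shadow models refine this by learning the full loss distribution for members versus non-members.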

πŸš€ Quick Start

```bash
# Run a PGD evasion attack on an image classifier
nullsec-adversarial evasion pgd --model resnet50.onnx --input samples/ --eps 0.03

# Black-box model extraction
nullsec-adversarial extract --target-url http://api.example.com/predict --queries 10000

# Membership inference attack
nullsec-adversarial membership --model target.pt --members train.csv --non-members test.csv

# Evaluate adversarial robustness
nullsec-adversarial benchmark --model model.pt --dataset cifar10 --attacks all
```
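Query-based extraction, stripped to its simplest form, means querying the victim on chosen inputs and fitting a surrogate to the returned outputs; the attacker never sees the weights. A toy NumPy sketch where the hypothetical victim is a hidden linear map (real targets need far more queries and a learned surrogate architecture, as in knockoff networks):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box victim: a hidden linear model we may only query
W_secret = rng.normal(size=(4, 3))

def query_victim(X):
    return X @ W_secret  # the attacker only ever sees these outputs

# Extraction: query with random inputs, fit a surrogate by least squares
X_q = rng.normal(size=(100, 4))
Y_q = query_victim(X_q)
W_stolen, *_ = np.linalg.lstsq(X_q, Y_q, rcond=None)
```

With enough informative queries the surrogate reproduces the victim's behaviour on-distribution, which is why rate limiting and query auditing are common defences.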

πŸ”— Related Projects

| Project | Description |
| --- | --- |
| nullsec-llmred | LLM red-teaming framework |
| nullsec-datapoisoning | Training data poisoning detection |
| nullsec-modelaudit | ML model security auditing |
| nullsec-promptinject | Prompt injection payloads |
| nullsec-linux | Security Linux distro (140+ tools) |

⚠️ Legal

For authorized ML security testing only. Do not use against models or systems without explicit permission.

πŸ“œ License

MIT License (@bad-antics)


About

AI/ML Security Tool - Part of NullSec Linux
