This repository collects papers that may be useful to read or cite for uvulab projects.
- /images contains papers related to projects working on image data
- /text contains papers related to projects working on text data
/images
/adversarial examples
/attacks
- Adversarial Examples In The Physical World
- Adversarial Patch
- Black-box Adversarial Attacks with Limited Queries and Information
- Decision-Based Adversarial Attacks Reliable Attacks Against Black-Box Machine Learning Models
- Exploring the Landscape of Spatial Robustness
- Fast Gradient Sign Method
- HopSkipJumpAttack A Query-Efficient Decision-Based Attack
- Practical Black-Box Attacks Against Machine Learning
- Simple Black-Box Adversarial Perturbations for Deep Networks
- Towards Deep Learning Models Resistant to Adversarial Attacks
- Universal Adversarial Perturbations
- ZOO Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
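Several of the attack papers above (most directly the Fast Gradient Sign Method) share one core step: perturb the input by a small amount in the direction of the sign of the loss gradient. A minimal sketch in plain NumPy, using a toy linear model whose input gradient is available in closed form — the model and values are illustrative, not taken from any listed paper's code:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method step: move every input dimension
    by eps in the direction that increases the loss."""
    return x + eps * np.sign(grad)

# Toy linear model: loss L(x) = -w . x for the true class, so the
# gradient w.r.t. the input is -w in closed form (illustrative values).
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 1.0, 1.0])
grad = -w
x_adv = fgsm_perturb(x, grad, eps=0.1)
print(x_adv)  # [0.9 1.1 0.9] -- each dimension moved by exactly eps
```

The key property, and why the perturbation is hard to see, is that the change is bounded by eps in every dimension (an L-infinity bound), no matter how large the gradient is.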
/defenses
- A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
- Adversarial and Clean Data Are Not Twins
- Adversarial Training
- Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
- Detecting Adversarial Samples from Artifacts
- Early Methods for Detecting Adversarial Images
- Feature Squeezing
- Gaussian Data Augmentation
- JPEG Compression
- Label Smoothing
- MagNet - Two-Pronged Defense against Adversarial Examples
- On Detecting Adversarial Perturbations
- On the (Statistical) Detection of Adversarial Examples
- PixelDefend
- Robust Detection of Adversarial Attacks by Modeling the Intrinsic Properties of Deep Neural Networks
- Robustness to Adversarial Examples Through an Ensemble of Specialists
- SafetyNet
- Spatial Smoothing
- Thermometer Encoding
- Total Variance Minimization
- Towards Robust Detection of Adversarial Examples
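Several of the defenses listed above (Feature Squeezing, Spatial Smoothing, JPEG Compression) are input transformations meant to destroy small perturbations before classification. The two functions below are illustrative sketches of that idea in plain NumPy — bit-depth reduction and a 1-D median filter — not reference implementations from the papers:

```python
import numpy as np

def squeeze_bits(x, bits):
    """Feature-squeezing-style bit-depth reduction: quantize values
    in [0, 1] to 2**bits levels so tiny perturbations round away."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth_1d(x, k=3):
    """Spatial smoothing with a sliding median window (edge-padded);
    the 1-D analogue of the median filter used on images."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

clean = np.array([0.0, 0.0, 1.0, 1.0])
adv = clean + np.array([0.04, 0.0, -0.03, 0.0])  # small perturbation
print(squeeze_bits(adv, bits=1))  # [0. 0. 1. 1.] -- clean signal recovered
```

Both transformations are also used as detectors (as in the Feature Squeezing paper): if the model's prediction changes a lot between the raw and squeezed input, the input is flagged as likely adversarial.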
/resizing
/text
/adversarial examples
/attacks
/defenses
- Adversarial Training Methods For Semi-Supervised Text Classification
- Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
- Combating Adversarial Misspellings with Robust Word Recognition
- Generating Natural Language Adversarial Examples
- HotFlip
- Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification
- Ranking Robustness Under Adversarial Document Manipulation
- TextBugger
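Many of the text papers above (e.g. TextBugger, Combating Adversarial Misspellings with Robust Word Recognition) revolve around small character-level edits that flip a classifier's prediction while staying readable to humans. A toy sketch of one such edit — an adjacent-character swap inside a random word; `char_swap` is a made-up helper for illustration, not an API from any listed paper:

```python
import random

def char_swap(text, rng):
    """Toy character-level text perturbation: swap two adjacent
    characters inside one randomly chosen word. Words of length
    <= 3 are left alone to keep the edit easy to overlook."""
    words = text.split()
    idx = rng.randrange(len(words))
    w = words[idx]
    if len(w) > 3:
        i = rng.randrange(1, len(w) - 2)
        w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
    words[idx] = w
    return " ".join(words)

rng = random.Random(0)
print(char_swap("adversarial examples fool classifiers", rng))
```

A real black-box attack would apply such edits greedily, keeping only the ones that most reduce the target model's confidence; the defense papers above counter this with spell-correction front ends or perturbation discriminators.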