In "On Inferring Training Data Attributes in Machine Learning Models", class labels are obtained with k-means clustering. But that makes the labels very smooth: there is no randomness, and no "unexpected" class labels. Because MIA/AIA techniques rely on the confidence of the model's output, those "unexpected" labels might be harder to infer in an attack. So let's try introducing randomness into the k-means labelling of the data.
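One simple way to inject that randomness is to take the k-means labels and flip a chosen fraction of them to a different cluster. A minimal sketch (the function name `flip_labels` and the flip strategy are my own, not from the paper):

```python
import numpy as np

def flip_labels(labels, n_classes, flip_frac, seed=0):
    """Randomly reassign a fraction of (e.g. k-means) labels to a *different*
    class, injecting the 'unexpected' labels a smooth clustering never produces."""
    labels = np.asarray(labels).copy()
    rng = np.random.default_rng(seed)
    n_flip = int(round(flip_frac * len(labels)))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    # Shift each chosen label by a random nonzero offset mod n_classes,
    # so the new label is guaranteed to differ from the old one.
    offsets = rng.integers(1, n_classes, size=n_flip)
    labels[idx] = (labels[idx] + offsets) % n_classes
    return labels
```

`flip_frac` would be the experimental knob: sweep it from 0 (pure k-means labels) upward and see how attack accuracy changes.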
Extend the attack signal to use prediction entropy along with maximum confidence
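Both signals come straight from the model's softmax output; a sketch of how they could be computed side by side (function name and interface are assumptions):

```python
import numpy as np

def attack_features(probs, eps=1e-12):
    """Per-example attack signals from a model's softmax output:
    maximum confidence and prediction entropy. High confidence / low
    entropy on an example is the usual hint that it was a training member."""
    probs = np.asarray(probs)
    max_conf = probs.max(axis=1)
    # eps guards against log(0) for hard one-hot predictions
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return max_conf, entropy
```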
To-do
Tasks
Jordan: implement the CNN in PyTorch as done in https://arxiv.org/pdf/1912.02292.pdf, with a parameter for modifying model width. Verify it works in our pipeline with CIFAR10 and get a sense of how long it takes to run.
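That paper's standard CNN scales four conv stages with a single width parameter k (widths k, 2k, 4k, 8k). A rough PyTorch sketch of the idea; the exact BatchNorm/pooling placement here is an assumption, not copied from the paper's code:

```python
import torch
import torch.nn as nn

class WidthCNN(nn.Module):
    """CNN in the style of arXiv:1912.02292: four conv stages with widths
    [k, 2k, 4k, 8k], then a linear classifier. Details (pool placement,
    BatchNorm) are an assumed approximation of the paper's architecture."""
    def __init__(self, k=64, num_classes=10):
        super().__init__()
        widths = [k, 2 * k, 4 * k, 8 * k]
        layers, in_ch = [], 3
        for w in widths:
            layers += [nn.Conv2d(in_ch, w, kernel_size=3, padding=1),
                       nn.BatchNorm2d(w),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
            in_ch = w
        self.features = nn.Sequential(*layers)
        # CIFAR10 input is 32x32; four 2x2 pools leave a 2x2 feature map
        self.classifier = nn.Linear(8 * k * 2 * 2, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```

Sweeping k (e.g. 1..64) then reproduces the width axis of the double-descent experiments.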
Alex: Extend pipeline to work with multiple trials
Rithvik: implement Jaccard distance for comparing images, and figure out how to add a variable amount of noise to the k-means labels, then run some preliminary experiments