Hello. Thank you for taking the time to look at this problem. I read your paper on 'ROBOT', fuzzing for robustness.
I am trying to measure the robustness of a model retrained with data generated by the fuzzing method. I saved some MNIST data generated by 'adapt' during a 5-minute run on Lenet5, concatenated it with the original MNIST training data, and retrained the tested model several times. Finally, I used FGSM adversarial samples generated by 'gen_adv.py' to test the retrained model's classification accuracy on adversarial inputs.
But the result is bad: less than 1% of the adversarial samples are classified correctly. Could you give me some suggestions, please?
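For reference, my understanding of the FGSM evaluation step is sketched below. This is not the code from 'gen_adv.py' — it is a minimal, self-contained illustration on a toy linear softmax model (all names here are my own), just to show the perturbation I assume is being applied: x + eps * sign(dL/dx).

import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy "model": a single linear layer over flattened 28x28 inputs, 10 classes.
W = rng.normal(scale=0.01, size=(784, 10))
b = np.zeros(10)

def fgsm(x, y_onehot, eps=0.1):
    """Perturb x by eps * sign of the cross-entropy gradient w.r.t. x."""
    p = softmax(x @ W + b)           # predicted probabilities, shape (N, 10)
    grad_x = (p - y_onehot) @ W.T    # dL/dx for cross-entropy + softmax
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

x = rng.random((5, 784))
y = np.eye(10)[rng.integers(0, 10, size=5)]
x_adv = fgsm(x, y)   # each pixel moves by at most eps

The real attack in 'gen_adv.py' presumably computes the gradient through the Keras Lenet5 model instead of this toy classifier.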
# Some code is omitted from this post.
# This reads all data generated by adapt and splits it 80/20 into a
# retraining set (adapt_xt / adapt_yt) and a validation set (adapt_xv / adapt_yv).
for i in adapts_data:
    for k, v in i.inputs.items():
        if k != i.label:
            ltv = len(v)
            y = np.zeros(10)
            y[int(k)] = 1
            if ltv == 1:
                # A single sample goes into both sets; reshape so the
                # batch dimension matches the accumulator arrays.
                x = np.reshape(v, (-1, 28, 28, 1))
                adapt_xt = np.concatenate((adapt_xt, x), axis=0)
                adapt_xv = np.concatenate((adapt_xv, x), axis=0)
                adapt_yt = np.concatenate((adapt_yt, [y]), axis=0)
                adapt_yv = np.concatenate((adapt_yv, [y]), axis=0)
            else:
                lt = int(ltv * 0.8)   # 80% for retraining
                lv = ltv - lt         # 20% for validation
                # adapt_xt / adapt_yt: data and labels for the training set
                # adapt_xv / adapt_yv: data and labels for the validation set
                adapt_xt = np.concatenate((adapt_xt, np.reshape(v[:lt], (-1, 28, 28, 1))), axis=0)
                adapt_yt = np.concatenate((adapt_yt, np.tile(y, (lt, 1))), axis=0)
                adapt_xv = np.concatenate((adapt_xv, np.reshape(v[lt:], (-1, 28, 28, 1))), axis=0)
                adapt_yv = np.concatenate((adapt_yv, np.tile(y, (lv, 1))), axis=0)
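One sanity check that may be worth running before concatenating with MNIST: confirm the generated inputs are on the same pixel scale as x_train and that the one-hot labels sum to 1. A [0, 255] vs [0, 1] mismatch alone can collapse accuracy. A minimal sketch with synthetic stand-ins (shapes as above; the real arrays would be substituted):

import numpy as np

# Synthetic stand-ins for the real arrays, same shapes/dtypes as above.
x_train = np.random.rand(100, 28, 28, 1).astype("float32")           # MNIST in [0, 1]
adapt_xt = (np.random.rand(20, 28, 28, 1) * 255).astype("float32")   # suspect scale
adapt_yt = np.eye(10)[np.random.randint(0, 10, 20)]

def report_range(name, a):
    print(f"{name}: min={a.min():.3f} max={a.max():.3f} dtype={a.dtype}")

report_range("x_train", x_train)
report_range("adapt_xt", adapt_xt)

# One-hot labels should sum to exactly 1 per row.
assert np.allclose(adapt_yt.sum(axis=1), 1.0)

# Flag a likely preprocessing mismatch before concatenating the two sets.
scale_mismatch = adapt_xt.max() > x_train.max() * 2
print("scale mismatch:", scale_mismatch)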
# This is the retraining procedure.
sNums = [600 * i for i in [1, 2, 3, 4, 6, 8, 10, 12, 16, 20]]
acc_clean = []
acc_fp = []
if adapt:
    for num in sNums:
        model_path = model_path_pre + 'model3-adapt_%d.h5' % num
        checkpoint = ModelCheckpoint(filepath=model_path, monitor='val_accuracy',
                                     verbose=0, save_best_only=True)
        callbacks = [checkpoint]
        lenet5 = load_model("./Lenet5_mnist0.h5")
        # Concatenate the first `num` generated samples with the original MNIST data.
        adapt_xt1 = np.concatenate((x_train, adapt_xt[:num]), axis=0)
        adapt_yt1 = np.concatenate((y_train, adapt_yt[:num]), axis=0)
        #lenet5.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        lenet5.fit(adapt_xt1, adapt_yt1, epochs=10, batch_size=64, verbose=0,
                   callbacks=callbacks, validation_data=(adapt_xv, adapt_yv))
        best_model = keras.models.load_model(model_path)
        _, aclean = best_model.evaluate(x_test, y_test, verbose=0)    # accuracy on clean test set
        _, afp = best_model.evaluate(adapt_xv, adapt_yv, verbose=0)   # accuracy on held-out generated inputs
        acc_clean.append(aclean)
        acc_fp.append(afp)
The retraining code above imitates 'select_retrain.py'.