%While there are works on holding ML-based systems accountable for their assigned tasks, these works fall primarily into two categories: accountability validation and increasing the explainability of such systems. Our approach obtains the ranges of the seed values, biases, and weights in order to verify a deep neural network (DNN). Specifically, a DNN generates these values randomly each time it is trained, and different sets of seed values, biases, and weights yield different output values. Therefore, if we can identify the range, from the lowest value to the highest value, of the seed values, biases, and weights, we can derive the range of possible output values. We apply our method to multiple DNNs trained on the same dataset and obtain the output range of each DNN. By comparing these output ranges, we can assess whether a DNN is good or bad.
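The idea above can be sketched in code: draw many random parameter sets (weights and biases) from a range of seeds, run each resulting network on the same input, and record the interval of outputs the seed variation induces. This is a minimal illustrative sketch, not the paper's implementation; the network shape, the `forward`/`init_params`/`output_range` helpers, and the normal initialization are assumptions made for the example.

```python
import numpy as np

def forward(x, params):
    """Forward pass of a tiny two-layer network (tanh hidden layer)."""
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def init_params(seed, n_in=4, n_hidden=8, n_out=1):
    """Randomly draw weights and biases for a given seed (assumed normal init)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    b1 = rng.normal(0.0, 0.5, n_hidden)
    W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
    b2 = rng.normal(0.0, 0.5, n_out)
    return W1, b1, W2, b2

def output_range(x, seeds):
    """Lowest and highest output over networks initialized from the given seeds."""
    outs = [forward(x, init_params(s)).item() for s in seeds]
    return min(outs), max(outs)

x = np.ones((1, 4))              # a fixed example input
lo, hi = output_range(x, range(100))
print(lo, hi)                    # the output interval induced by seed variation
```

Comparing such intervals across several networks trained on the same dataset would then give a basis for judging whether one network's behavior is an outlier relative to the others.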