Commit ecdc255

Update
1 parent 8d3263b commit ecdc255

File tree

5 files changed: +6 -6 lines changed


abstract.tex

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 % $Id: abstract.tex,v 1.1.1.1 2007/10/03 22:28:18 hridesh Exp $
 %Title: Enumerator, Packing-Unpacking and Zip Based on Lambda Calculus
-Deep neural networks have gained wide popularity in the past decade. Compared to non-ML systems, few verification mechanisms exist for these learning-based models. One such verification technique is the contract between the advertised trustworthiness and the actual performance of these models. Our study identifies that, due to random initialization, accuracy, i.e., trustworthiness, changes even if the experimental setup remains unchanged. To address this issue, we propose an approach that identifies these randomly initialized parameters and applies a search algorithm to find the corresponding values that provide the optimum accuracy.
+Deep neural networks have gained wide popularity in the past decade. Compared to non-ML systems, few verification mechanisms exist for these learning-based models. One such verification technique is the contract between the advertised trustworthiness and the actual performance of these models. Our study identifies that, due to random initialization, accuracy, i.e., trustworthiness, changes even if the experimental setup remains unchanged. To address this issue, we propose an approach that identifies these randomly initialized parameters and applies a search algorithm to find the corresponding values that provide near-optimal accuracy.
 We also propose a user-intent-based technique that restricts the search process in terms of time, trials, and accuracy gain, helping a DNN model achieve accountability with respect to classification accuracy.
 
 %We have also proposed a specification language to restrict the learning process that helps a DNN model to achieve accountability in the aspect of classification accuracy.

approach.tex

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ \section{Approach}
 In \S\ref{sec:background}, we discussed forward and backward propagation. In this two-step learning process, random initialization plays a crucial role in model validation.
 %In Figure \ref{fig:rq5}, a traditional DNN model iteratively chooses the input. The choice can be a single input at a time or a group of inputs. Three known parameters require random initialization, i.e., seed, weight, and bias \cite{sutskever2013importance}.
 %Our hypothesis in this study states that \emph{$H_0$: Knowing the distribution of the randomly initialized parameters can provide the distribution of the output parameter, e.g., the accuracy metric.} Based on this hypothesis, we select the operations that require random initialization. Then, with the known distributions, we perform the model operations as follows.
-Figure \ref{fig:flow} depicts an overview of the proposed approach. When we started the project, our intuition was that we needed to experimentally evaluate the values corresponding to weight and bias to understand the distributions and the initialization process. Three Ph.D. students went through the documentation. While studying the Keras documentation, we found two parameters that are responsible for weight and bias initialization. To modify the weight and bias and validate the accuracy, we convert the models into their imperative versions. Once we have that, we search for and update the weight and bias. To restrict the search process, we consider three user intents, i.e., max time, max gain, and max trial, to find near-optimal accuracy under the given constraints.
+Figure \ref{fig:flow} depicts an overview of the proposed approach. When we started the project, our intuition was that we needed to experimentally evaluate the values corresponding to weight and bias to understand the distributions and the initialization process. To understand the implementation of the \emph{Keras}-based operations, three Ph.D. students went through the documentation. While studying the Keras documentation, we found two parameters that are responsible for weight and bias initialization. To modify the weight and bias and validate the accuracy, we convert the models into their imperative versions. Once we have that, we search for and update the weight and bias. To restrict the search process, we consider three user intents, i.e., max time, max gain, and max trial, to find near-optimal accuracy under the given constraints.
 
 %\begin{equation}
 %f(\sum_{\chi}{W_iX_i+B}), \chi\sim D(\mu, \sigma^2)
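The intent-constrained search-and-update loop described in this hunk could be sketched as follows. This is a minimal illustration, not the paper's implementation; `evaluate`, `perturb`, and the parameters `max_time`, `max_trials`, and `max_gain` are hypothetical names standing in for the accuracy check, the weight/bias update step, and the three user intents.

```python
import random
import time

def intent_constrained_search(evaluate, perturb, w0, *,
                              max_time=5.0, max_trials=100, max_gain=0.05,
                              seed=0):
    """Local search over weight/bias values, stopped by three user intents:
    a wall-clock budget (max_time), a trial budget (max_trials), or a
    target accuracy gain over the starting point (max_gain)."""
    rng = random.Random(seed)
    best_w, best_acc = w0, evaluate(w0)
    start_acc, start = best_acc, time.monotonic()
    for _trial in range(max_trials):          # intent 1: trial budget
        if time.monotonic() - start > max_time:
            break                             # intent 2: time budget exhausted
        if best_acc - start_acc >= max_gain:
            break                             # intent 3: desired gain reached
        cand = perturb(best_w, rng)           # propose updated weight/bias
        acc = evaluate(cand)
        if acc > best_acc:                    # keep candidates that improve
            best_w, best_acc = cand, acc
    return best_w, best_acc

# Toy stand-in for a model: "accuracy" peaks at w = 0.3.
best_w, best_acc = intent_constrained_search(
    lambda w: 1.0 - (w - 0.3) ** 2,
    lambda w, rng: w + rng.uniform(-0.1, 0.1),
    0.0)
print(best_acc >= 1.0 - 0.3 ** 2)  # -> True (never worse than the start)
```

In a real setting, `evaluate` would retrain or re-validate the imperative model with the candidate weights and biases, which is why the time and trial budgets matter.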

intro.tex

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 % $Id: intro.tex,v 1.1.1.1 2007/10/03 22:28:18 hridesh Exp $
 
 \section{Introduction}
-With the increasing popularity of Machine Learning (ML) based systems, verification and validation of such systems are necessary. Although a vast amount of research has been carried out on non-ML validation frameworks, little validation research has been done on ML-based systems because of their probabilistic nature and complexity. Without a proper validation framework, one might ask how ML-based systems can be held accountable to expectations, for instance, to the contract made with the end user by advertising the trustworthiness of these systems in terms of accuracy. But this metric of trustworthiness varies even if the whole experimental setup remains the same. In this study, we address such problems and propose a programming language infrastructure to measure the optimum accuracy.
+With the increasing popularity of Machine Learning (ML) based systems, verification and validation of such systems are necessary. Although a vast amount of research has been carried out on non-ML validation frameworks, little validation research has been done on ML-based systems because of their probabilistic nature and complexity. Without a proper validation framework, one might ask how ML-based systems can be held accountable to expectations, for instance, to the contract made with the end user by advertising the trustworthiness of these systems in terms of accuracy. But this metric of trustworthiness varies even if the whole experimental setup remains the same. In this study, we address such problems and propose a search algorithm based on the user's intent to measure near-optimal accuracy.
 
-Recent work in this field can be primarily categorized into two directions: verifying that a model is accountable for its assigned task \cite{pulina2010abstraction,gehr2018ai2,du2018techniques,abdul2018trends,zhang2016understanding}, and enforcing accountability by making ML models more robust \cite{wang2018formal,katz2017reluplex,jia2019taso}. Prior work has focused on validating input influence \cite{datta2016algorithmic} and on explaining models to make black-box systems grayer. However, these systems rely on either domain knowledge or model-operation knowledge as the key to increasing explainability. In this study, we combine these two types of knowledge and propose a system that takes the input dataset, the model operations, and their distributions as input and produces an optimum accuracy rather than a single value that changes every time an ML model is trained with the same dataset and the same experimental setup.
+Recent work in this field can be primarily categorized into two directions: verifying that a model is accountable for its assigned task \cite{pulina2010abstraction,gehr2018ai2,du2018techniques,abdul2018trends,zhang2016understanding}, and enforcing accountability by making ML models more robust \cite{wang2018formal,katz2017reluplex,jia2019taso}. Prior work has focused on validating input influence \cite{datta2016algorithmic} and on explaining models to make black-box systems grayer. However, these systems rely on either domain knowledge or model-operation knowledge as the key to increasing explainability. In this study, we combine these two types of knowledge and propose a system that takes the input dataset, the model operations, and the users' choice as input and produces a near-optimal accuracy rather than a single value that changes every time an ML model is trained with the same dataset and the same experimental setup.
 %We leverage the information to propose a specification language \emph{ADNN} that verifies the model's capability.
 We leverage this information to propose a user-intent-based local search approach, \emph{ADNN}, that verifies a model's capability in terms of classification accuracy.
 %While there are works on holding an ML-based system accountable for the assigned task, these works are primarily categorized into two divisions, accountability validation and increasing the explainability of such systems. Our approach is obtaining the range of seed values, bias, and weight to verify the deep neural network (DNN). Specifically, the DNNs randomly generate these values whenever they train. Moreover, with different sets of seed value, bias, and weight, we can obtain different output values. Therefore, if we can identify the range, from the lowest value to the highest value, of seed values, bias, and weight, we can acquire the range of the output value. We will apply our method to multiple DNNs on the same dataset and get the range of the output of these DNNs. By comparing the DNNs' output ranges, we can verify whether a DNN is good or bad.

relatedwork.tex

Lines changed: 2 additions & 2 deletions
@@ -19,9 +19,9 @@ \subsection{\textbf{Accuracy validation}}
 
 \subsection{Accountability}
 According to \cite{veale2018fairness}, accountability in decision making represents the explanation of the ``ongoing strategy''. In \S\ref{sec:motivation}, we found that a single model structure can provide different decision-making capabilities due to the probabilistic distribution asserted in the initialization process. Our preliminary evidence shows that a DNN model can end up achieving different accuracy in different settings. Simple questions arise: how can we really say that a DNN model performs as it advertises? Is there a reliable solution that does not change with the settings?
-To answer this question and hold the DNN model accountable in terms of its reported accuracy, we propose an approach, ADNN, or accountable DNN.
+To answer these questions and hold the DNN model accountable in terms of its reported accuracy, we propose an approach named ADNN, or accountable DNN.
 
-In our proposed approach, we learn how the initialization parameters vary the accuracy based on a manual study of the \emph{Keras} documentation. Then, we update the weight and bias parameters. We implement a search-and-update approach to modify the weight and bias. The process is restricted based on the users' intent, i.e., max time, gain, and trial.
+In our proposed approach, we learn how the initialization parameters vary the accuracy based on a manual study of the \emph{Keras} documentation. We implement a search-and-update approach to modify the weight and bias. The process is restricted based on the users' intent, i.e., max time, gain, and trial.
 
 
 %Zhang et al. \cite{zhang2018interpretable} proposed a method to modify existing convolutional neural network (CNN) based models to make them more interpretable by encoding more meaningful semantics. In this case, the CNN's classification accuracy may decrease a bit when an interpretable CNN model is deployed to classify a large number of categories simultaneously, because filters in a convolutional layer are assigned to different categories. So, accuracy validation is crucial while making an interpretable CNN model.
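The hunk above attributes the accuracy variance to the randomly drawn initial weights. As background, Keras' default kernel initializer for a `Dense` layer, `'glorot_uniform'`, samples from a uniform distribution on $[-\ell, \ell]$ with $\ell = \sqrt{6/(fan_{in}+fan_{out})}$, so different seeds yield different starting weights for an otherwise identical setup. The sketch below reimplements that draw with the standard library purely for illustration; it is not the paper's code, and `glorot_uniform` here is a hypothetical helper, not the Keras API.

```python
import math
import random

def glorot_uniform(fan_in, fan_out, seed):
    # Glorot/Xavier uniform initialization (the Keras Dense default):
    # draw each weight from U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out)).
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

# Same layer shape, different seeds: the initial weight matrices differ,
# which is the source of the run-to-run accuracy variance discussed above.
w_a = glorot_uniform(4, 3, seed=0)
w_b = glorot_uniform(4, 3, seed=1)
print(w_a != w_b)  # -> True
```

Fixing the seed makes the draw reproducible, which is what lets a search-and-update procedure revisit and compare specific initializations.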

report.pdf

53 Bytes
Binary file not shown.
