Oscar Chang 阳光宅男 B-log

Can we Recover a Neural Network Exactly?

Given oracle access to a blackbox neural network, is it possible to recover its weights (up to equivalence of computing the same function) exactly? We do not know the answer, but note down some observations regarding the problem.

The Problem

Given oracle access to a function $f_\theta$ computed by a feedforward neural network of known architecture with non-linearity $\mathrm{ReLU}$ and parameters $\theta$, can we use an algorithm to find a $\theta'$ such that $f_{\theta'} = f_\theta$?

Oracle access means that we can propose queries $x$ and get back answers $f_\theta(x)$.
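For concreteness, here is a minimal numpy sketch of the setup, where a bias-free ReLU network is hidden behind a query-only interface (the `make_oracle` helper and the random weights are purely illustrative):

```python
import numpy as np

def make_oracle(weights):
    """Wrap a bias-free ReLU network f_theta behind a query-only interface.

    `weights` is a list of weight matrices applied left to right, with a
    ReLU after every layer. Callers of the returned function see only
    inputs and outputs, never the weights themselves.
    """
    def query(x):
        h = np.asarray(x, dtype=float)
        for W in weights:
            h = np.maximum(W @ h, 0.0)  # ReLU(W h)
        return h
    return query

# A hidden two-layer network that we are only allowed to query.
rng = np.random.default_rng(0)
f = make_oracle([rng.standard_normal((4, 3)), rng.standard_normal((2, 4))])
print(f([1.0, -2.0, 0.5]))  # the only thing our algorithm gets to observe
```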

Observation 1: Solving this with SGD gives us a pretty good approximation

We can initialize $\theta'$ randomly, feed the network random inputs $x$, and use the loss $\|f_{\theta'}(x) - f_\theta(x)\|^2$ as a training signal to update $\theta'$ with SGD. This will provide us with a pretty good approximation.
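For example, here is a toy numpy sketch of this idea for a single ReLU layer (the learning rate, number of steps, and the closed-form gradient are specific to this simplified one-layer setup):

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.standard_normal((2, 3))           # hidden teacher weights
f = lambda x: np.maximum(W_true @ x, 0.0)      # oracle: f_theta(x) = ReLU(Wx)

W = rng.standard_normal((2, 3))                # random initialization of theta'
lr = 0.01
for _ in range(50_000):
    x = rng.standard_normal(3)                 # random query
    y, y_hat = f(x), np.maximum(W @ x, 0.0)
    # gradient of ||y_hat - y||^2 w.r.t. W for a single ReLU layer
    grad = np.outer(2.0 * (y_hat - y) * (W @ x > 0), x)
    W -= lr * grad

print(np.abs(W - W_true).max())                # typically tiny, but not provably zero
```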

But our objective in this case is to find a $\theta'$ such that we recover the original function exactly, i.e. where we can prove that indeed $f_{\theta'} = f_\theta$.

Observation 2: It is trivial to solve this for $k$-ary variables

Suppose the network computes binary variables; then $f_\theta$ represents a boolean function. We can simply enumerate all possible combinations of inputs to the boolean function as a truth table, and find a $\theta'$ which is satisfiable, i.e. consistent with the entire truth table.

The same is true if the network computes $k$-ary variables for any finite $k$. This cannot be done for real variables though, because it is impossible to enumerate the entire input space of a real-valued function.
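A small Python sketch of this brute-force idea for a one-layer network over $\{0,1\}^3$ (the hidden weights and the finite candidate grid are made up for illustration, and I threshold the ReLU output to get a boolean):

```python
import itertools
import numpy as np

n = 3
w_true = np.array([1.0, -2.0, 1.0])                      # hidden weights (illustrative)
f = lambda x: int(np.maximum(w_true @ x, 0.0) > 0)       # boolean output of the network

# Step 1: the whole input space is finite, so enumerate the truth table.
table = {x: f(np.array(x, float)) for x in itertools.product((0, 1), repeat=n)}

# Step 2: search candidate parameters until one reproduces the entire table.
for w in itertools.product((-2.0, -1.0, 0.0, 1.0, 2.0), repeat=n):
    g = lambda x: int(np.maximum(np.array(w) @ x, 0.0) > 0)
    if all(g(np.array(x, float)) == y for x, y in table.items()):
        print("found an equivalent theta':", w)          # any consistent theta' will do
        break
```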

Observation 3: This cannot be done in the limit.

The Universal Approximation Theorem says that a feedforward neural network with a single, sufficiently wide hidden layer can approximate any continuous function on compact subsets of $\mathbb{R}^n$, given a nonconstant, bounded, and continuous activation function (a condition that does not really hold for ReLU, which is unbounded).

In general, if we are trying to recover an arbitrary real function exactly, we will need an infinite number of queries.

Attempted Solution

When my advisor first posed this problem, my instinctual reaction was that it cannot be solved. After all, there is no general way of solving a non-linear equation analytically. Furthermore, if this can be done, why would we not simply use this method instead of SGD to optimize neural networks?

But upon thinking it through, it isn’t immediately clear that we cannot solve this, since this is not a general non-linear equation, but a very specific one with ReLUs. An algorithm for solving this also wouldn’t necessarily help in optimizing neural networks, since in general, we only have access to a set of labeled data points and not oracle access to the “optimal” function.

If we wanted to answer this problem in the negative, we would have to explain how the ability to solve this problem would make some known difficult problem easy. This seems like a complexity/learning/security theory question, which is beyond my current range of expertise. Many theoreticians also typically stick to boolean circuits instead of real-valued functions, so I wasn’t able to google for anything relevant to solving this problem, even though it’s highly likely that someone else has already posed this problem and answered it.

I wasn’t able to solve the problem in general, but I note down specific cases below where it is possible to find solutions.

We assume no biases, and start with the linear case.

Case 1: $f_\theta(x) = Wx$

We write $X$ as an invertible matrix containing the queries as its column vectors (e.g. $X = I$). Then $f_\theta(X) = WX$ is a matrix containing the answers as column vectors, and we can recover the weights directly as $W = f_\theta(X) X^{-1}$.

(If there were biases in the linear case, we can set $x = 0$ to recover them directly, since $f_\theta(0) = b$.)
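A numpy sketch of Case 1, with the optional bias handled by an extra query at $x = 0$ (the specific dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
W_true, b_true = rng.standard_normal((2, 3)), rng.standard_normal(2)
f = lambda x: W_true @ x + b_true                   # linear oracle (with a bias, for illustration)

b = f(np.zeros(3))                                  # f(0) = b recovers the bias
W = np.column_stack([f(e) - b for e in np.eye(3)])  # f(e_i) - b is the i-th column of W

assert np.allclose(W, W_true) and np.allclose(b, b_true)
```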

Case 2: $f_\theta(x) = \mathrm{ReLU}(Wx)$

$\mathrm{ReLU}(z) = \max(z, 0)$ implies that the negative entries of $Wx$ will result in $0$s in $f_\theta(x)$. $f_\theta(-x)$ thus recovers the previously zeroed out entries (negated) while zeroing out the previously positive entries.

We write $Y^+ = f_\theta(X)$ and $Y^- = f_\theta(-X)$.

Then, $WX = Y^+ - Y^-$, and we can recover $W$ by reduction to Case 1.
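In code, the reduction looks like this (again a bias-free numpy sketch with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
W_true = rng.standard_normal((2, 3))
f = lambda x: np.maximum(W_true @ x, 0.0)            # oracle: ReLU(Wx)

# ReLU(Wx) - ReLU(-Wx) = Wx, so querying x and -x reduces this to Case 1.
W = np.column_stack([f(e) - f(-e) for e in np.eye(3)])

assert np.allclose(W, W_true)
```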

Case 3: All the weights are scalar.

Let’s start with the example of a two-layer network: $f_\theta(x) = \mathrm{ReLU}(w_2 \, \mathrm{ReLU}(w_1 x))$.

If $w_2$ is non-positive, then $f_\theta$ is just the zero function. Otherwise, we can determine the sign assignment of $w_1$ by assuming every possible sign assignment ($w_1 > 0$, $w_1 = 0$, $w_1 < 0$) and checking if it's satisfiable with the sign assignments of $f_\theta(1)$ and $f_\theta(-1)$.

Knowing the sign of $w_1$ helps us remove the inner $\mathrm{ReLU}$, and thus figure out the unique product $w_2 w_1$ (e.g. if $w_1 > 0$, then $f_\theta(1) = w_2 w_1$). Then, our $\theta'$ can be any values for $w_1', w_2'$ that satisfy this unique product and the above sign assignment.
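Here is the two-layer scalar case as a Python sketch (the concrete weights are arbitrary; the recovered $w_1', w_2'$ are deliberately different numbers from the originals but compute the same function):

```python
import numpy as np

relu = lambda t: max(t, 0.0)
w1, w2 = -1.5, 2.0                       # hidden scalar weights, with w2 > 0
f = lambda x: relu(w2 * relu(w1 * x))

# Step 1: determine the sign of w1 from the signs of f(1) and f(-1).
if f(1.0) > 0:
    sign, prod = +1, f(1.0)              # w1 > 0, so f(1) = w2 * w1
elif f(-1.0) > 0:
    sign, prod = -1, -f(-1.0)            # w1 < 0, so f(-1) = -w2 * w1
else:
    sign, prod = 0, 0.0                  # w1 = 0 (or f is the zero function)

# Step 2: any (w1', w2') with w2' > 0, the right sign, and w1' * w2' = prod works.
w1p = sign * 1.0
w2p = prod / w1p if sign != 0 else 1.0
g = lambda x: relu(w2p * relu(w1p * x))
assert all(np.isclose(g(x), f(x)) for x in np.linspace(-3.0, 3.0, 13))
```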

In general, $f_\theta(x) = \mathrm{ReLU}(w_n \, \mathrm{ReLU}(w_{n-1} \cdots \mathrm{ReLU}(w_1 x) \cdots))$. As above, if any of the $w_i$ for $i \geq 2$ is non-positive, then $f_\theta$ is just the zero function. Otherwise, we can determine the sign assignment of $w_1$ by assuming every possible sign assignment ($w_1 > 0$, $w_1 = 0$, $w_1 < 0$) and checking if it's satisfiable with the sign assignments of $f_\theta(1)$ and $f_\theta(-1)$.

Knowing the sign of $w_1$ helps us remove the innermost $\mathrm{ReLU}$, and thus figure out the unique product $\prod_{i=1}^{n} w_i$. Then, our $\theta'$ will be any value assignment to $w_1', \ldots, w_n'$ that satisfies this unique product and the above sign assignment.

This general two-step process of figuring out the sign assignments and then the value assignments of the weights cannot be applied in general to matrix-shaped weights $W_i$. This is because we can end up with cases where it is possible to establish that certain entries in a weight matrix have opposite signs, but impossible to tell exactly which entry is positive. This under-determination does not imply that we cannot solve the problem in general, since the weights we are trying to find are under-determined in the first place, i.e. there can be many $\theta'$ such that $f_{\theta'} = f_\theta$.

Case 4: Matrix-shaped weights $W_i$, e.g. $f_\theta(x) = \mathrm{ReLU}(W_2 \, \mathrm{ReLU}(W_1 x))$

This is where I got stuck and am not able to make further progress for now.

In practice, it seems like one ought to be able to write a program to enumerate the various possible cases, say for a small fixed dimension. But then, one also has to prove that this program recovers a $\theta'$ such that $f_{\theta'} = f_\theta$ exactly.