Saturday, November 26, 2022

optimization 9 - Lipschitz constant and earlier layers of neural networks

I've tried to follow through with the path described in https://babkin-cep.blogspot.com/2022/10/optimization-8-descent-rate-and-v.html: find the Lipschitz constant for the neural network, both to make it converge faster and to avoid overshooting badly when the weights at the neurons grow larger than 1. In that previous post I looked at the last layer, so next I figured I should look at the earlier layers: since they don't have a square, they might be easier. Or maybe not: they feed into all the following layers.

But maybe we can make them easier. We can notice that in the original formula for gradient descent every step is multiplied by the descent rate. So maybe we can just pull that descent rate factor out and "put it outside the parentheses". Which means that when we propagate the gradient from the last layer backwards, we can multiply it once by the descent rate, and that would allow us to skip multiplying by it everywhere else (which is also an optimization in the software sense!). The result of this multiplication (using r for the descent rate and v for the activated output of a previous layer's neuron)

m = r * df/dv

can also be seen as having a "physical meaning" that says "I want to shift the output v of the previous-layer neuron by this much to achieve the next value of v" (here t is the index of the training pass):

v[t+1] = v[t] - m. 
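
To make this concrete, here is a minimal sketch of one such backward pass in Python. The names and the shape of the loop are my own, not from any particular library; a ReLU-like activation with gradient 1 is assumed and the bias weights are left out:

import numpy as np

def backprop_step_folded_rate(weights, activations, d_last, rate):
    # weights[k]:     weight matrix of layer k, shape (outputs, inputs)
    # activations[k]: activated outputs feeding layer k (the v of the layer before it)
    # d_last:         df/dv at the outputs of the last layer
    # Multiply by the descent rate once, at the very end of the network...
    m = rate * d_last
    for k in reversed(range(len(weights))):
        # ...propagate the already-scaled gradient backwards first
        # (a ReLU-like activation with gradient 1 is assumed here),
        m_prev = weights[k].T @ m
        # ...then apply the weight update with no extra multiplication by the rate
        weights[k] -= np.outer(m, activations[k])
        m = m_prev
    return weights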

In reality, of course, the output v feeds multiple neurons of the next layer, and these feedbacks get added together. But that can be folded into the same physical interpretation too: if this output feeds into N neurons, we can say that

r = R / N

where R is some "basic descent rate", and so

v[t+1] = v[t] - R * sum{i}(df[i]/dv) / N

That is, the addition turns into averaging with a different rate R. 
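
A tiny numeric check of this equivalence in Python (the numbers are made up just for illustration):

import numpy as np

R = 0.1                                  # the "basic descent rate"
feedbacks = np.array([0.3, -0.5, 0.8])   # df[i]/dv from N = 3 next-layer neurons
N = len(feedbacks)

r = R / N                                # per-connection rate
step_summed = r * feedbacks.sum()        # sum of feedbacks with the scaled rate
step_averaged = R * feedbacks.mean()     # average of feedbacks with the basic rate
assert np.isclose(step_summed, step_averaged)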

But wait, there are also multiple training cases (K of them), whose gradients also get added up. That's not a problem, because the outputs from the different training cases also get added up.

An interesting corollary from this seems to be that when a neural network has a different number of neurons per layer, the descent rate should be normalized to the number of outputs, and the layers that have fewer outputs can use a higher descent rate. But I suppose there is also a counter-argument: if there are many outputs, the feedback from them is more likely to be contradictory, so the sum is more likely to end up close to 0 rather than all pointing in the same direction, and a higher rate may be useful there. That's probably why nobody bothers with adjusting the descent rate per layer.

So all right, let's build a simple example. Suppose we have a ReLU-like activation function with a gradient of 1, and a neuron with 2 inputs and an offset weight:

v = u[1] y[1] + u[2] y[2] + b

And since the training cases are added up linearly, we can just start with an example with one training case. Then we could write the function to minimize as

v = u[1] y[1] + u[2] y[2] + b - m

(getting ahead of myself, this expression turns out to be wrong, but I want to get gradually to why it's wrong). The partial derivatives will be

dv/du[i] = y[i]

dv/dy[i] = u[i]

dv/db = 1
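
For concreteness, here are the same example neuron and its partial derivatives as a small Python sketch (my own naming, with Python's 0-based indexing standing in for the 1-based indexes above):

def neuron(u, y, b):
    # v = u[1] y[1] + u[2] y[2] + b
    return u[0] * y[0] + u[1] * y[1] + b

def neuron_gradient(u, y, b):
    dv_du = [y[0], y[1]]  # dv/du[i] = y[i]
    dv_dy = [u[0], u[1]]  # dv/dy[i] = u[i]
    dv_db = 1.0           # dv/db = 1
    return dv_du, dv_dy, dv_db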

and computing the gradient at two points, then taking the ratio of the squared distance between the gradients to the squared distance between the arguments, gives:

L^2 = ( (y[1, 1] - y[1, 0])^2 + (u[1, 1] - u[1, 0])^2 + (y[2, 1] - y[2, 0])^2 + (u[2, 1] - u[2, 0])^2 + (1 - 1)^2 ) /
  ( (u[1, 1] - u[1, 0])^2 + (y[1, 1] - y[1, 0])^2 + (u[2, 1] - u[2, 0])^2 + (y[2, 1] - y[2, 0])^2 + (b[1] - b[0])^2 )
  <= 1

because the denominator of the fraction has all the same terms for u and y (just in a different order), plus an extra non-negative term for b. So L <= 1, (1/L) >= 1, and propagating the same rate of descent or more should be what we need.
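
The same bound can be checked numerically by evaluating the gradient at two arbitrary points, along the lines of this sketch (the points are made up):

import numpy as np

def grad(p):
    # gradient of v = u[1] y[1] + u[2] y[2] + b with respect to
    # p = (u[1], y[1], u[2], y[2], b), in the same order
    u1, y1, u2, y2, b = p
    return np.array([y1, u1, y2, u2, 1.0])

# two arbitrary points (u[1], y[1], u[2], y[2], b)
p0 = np.array([0.3, 0.7, -0.2, 0.5, 0.1])
p1 = np.array([0.6, 0.1, 0.4, -0.3, 0.8])

L = np.linalg.norm(grad(p1) - grad(p0)) / np.linalg.norm(p1 - p0)
print(L)  # always <= 1: the numerator has the same u and y differences,
          # only permuted, and the b difference is replaced by 0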

To test this conclusion, I've made up a small example with concrete numbers and did what the TFOCS library does: take some steps and compute the approximation of L by dividing the difference of gradients between the steps by the difference of weights. And... it didn't work out. Not only did it produce L > 1, but when I'd go back and adjust the step to be smaller, the new L would be even larger, and when I'd adjust again to an even smaller step, L would get even larger than that again.

What went wrong? I got the formula wrong. Looking at the expression

v = u[1] y[1] + u[2] y[2] + b

it obviously has the minimum where all the variables are 0. So yeah, if we use L=1 and step straight to 0, that would minimize it. But we don't want v to get to 0, we want it to get to some value (v[t-1] - m). The right formula for minimization should be

abs( u[1] y[1] + u[2] y[2] + b - (v[t-1] - m) )

The absolute value part was missing. And I don't know how to compute L with the absolute value in it. I guess one way around it would be to substitute a square for the absolute value:

( u[1] y[1] + u[2] y[2] + b - (v[t-1] - m) )^2

It can probably be done just for the computation of L, even when the neural network itself stays without squares. But I'm not sure. 
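
Here is a sketch of what that substitution might look like (my own naming; the target value (v[t-1] - m) is treated as a constant), together with the TFOCS-style empirical estimate of L between two consecutive descent steps:

import numpy as np

def f_squared(p, target):
    # p = (u[1], y[1], u[2], y[2], b); target stands for (v[t-1] - m)
    u1, y1, u2, y2, b = p
    return (u1 * y1 + u2 * y2 + b - target) ** 2

def grad_squared(p, target):
    u1, y1, u2, y2, b = p
    d = 2.0 * (u1 * y1 + u2 * y2 + b - target)
    # chain rule: the inner gradient is the same (y1, u1, y2, u2, 1)
    return d * np.array([y1, u1, y2, u2, 1.0])

# empirical estimate of L between two consecutive descent steps,
# in the spirit of what TFOCS does
p0 = np.array([0.3, 0.7, -0.2, 0.5, 0.1])
target = 0.4
step = 0.1
p1 = p0 - step * grad_squared(p0, target)
L_est = (np.linalg.norm(grad_squared(p1, target) - grad_squared(p0, target))
         / np.linalg.norm(p1 - p0))
print(L_est)  # unlike the linear case, this is not bounded by 1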

Another interesting observation is what happened when my steps in the test overshot the empirically computed L: they would flip the sign of the weights. Which means that the occasional overshoots are important: without them, the signs would stay as they had been initially assigned by the random seeding. Even if we could compute and properly use the right value of L, we might not want to, because we'd lose the sign flips.
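
One way to see that mechanism in a single dimension (a made-up example, not the actual test from above): for f(w) = (w*y - target)^2, a step sized exactly to 1/L lands on the minimum, while an oversized step overshoots past it and flips the sign of w.

def grad_f(w, y, target):
    # f(w) = (w * y - target)^2, so f'(w) = 2 y (w y - target)
    return 2.0 * y * (w * y - target)

y, target = 1.0, 0.5   # the minimum is at w = target / y = 0.5
w0 = 2.0               # initial weight, positive
L = 2.0 * y * y        # Lipschitz constant of f'(w)

w_exact = w0 - (1.0 / L) * grad_f(w0, y, target)  # 0.5: lands on the minimum, sign kept
w_over = w0 - 0.9 * grad_f(w0, y, target)         # -0.7: overshoots past 0, sign flipped
print(w_exact, w_over)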
