Sunday, November 27, 2022

optimization 10 - Lipschitz constant and the last layer of neural networks

After the empirical computation of the Lipschitz constant for the lower layers of a neural network didn't pan out, as described in the last post, I've tried doing it for the last layer. The last layer minimizes an honest squared function, so it should not have the same issues as the previous layers. And I've done it in code, see https://sourceforge.net/p/triceps/code/1770/tree/

It computes 1/L empirically, like the TFOCS library does: it keeps the weights and gradients from the last pass (including, by the way, the inputs to the last layer, since they are variables in the formula too), finds the differences, computes the norms and eventually 1/L, and compares it with the descent rate used for the pass. If there is room, it gradually grows the rate. If 1/L has shrunk and the last rate was too high, it reduces the rate and redoes the previous step (this is somewhat tricky, as the gradient is computed only in the following step, but I've got the rollback to work). Most of the time one rollback is enough to set things straight. But if not, well, it's just one overshoot that would get straightened out in the following steps, so I've limited the rollback to no more than once in any case.
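To make this more concrete, here is a minimal sketch of that kind of rate adjustment in C++. It's not the actual Triceps code: the class name, the growth and safety factors, and the flat vector representation of the weights are all made up for the illustration, and the rollback logic is only hinted at in a comment.

#include <cmath>
#include <vector>

// A sketch of estimating L empirically, TFOCS-style, from two consecutive
// passes: L ~= |grad[t] - grad[t-1]| / |w[t] - w[t-1]|, and then keeping
// the descent rate under 1/L.
struct RateEstimator {
    std::vector<double> prevW, prevG; // weights and gradients from the last pass
    double rate = 1e-3;               // current descent rate

    double update(const std::vector<double> &w, const std::vector<double> &g) {
        if (!prevW.empty()) {
            double dw = 0., dg = 0.;
            for (size_t i = 0; i < w.size(); i++) {
                dw += (w[i] - prevW[i]) * (w[i] - prevW[i]);
                dg += (g[i] - prevG[i]) * (g[i] - prevG[i]);
            }
            if (dw > 0. && dg > 0.) {
                double invL = std::sqrt(dw) / std::sqrt(dg); // empirical 1/L
                if (rate < invL)
                    rate *= 1.1;       // there is room, grow the rate gradually
                else
                    rate = invL * 0.5; // the rate was too high, cut it below 1/L
                                       // (the real thing would also roll back the
                                       // last step, at most once)
            }
        }
        prevW = w;
        prevG = g;
        return rate;
    }
};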

The results are rather mixed. If the whole model consists of one Corner-activated neuron, it works great: it finds the right rate and converges quickly. The extra overhead is definitely worth the trouble there. Though interestingly, it first finds a solution with a slightly lower squared error, and then as it does more steps, the gradient decreases but the squared error grows a little. My guess is that this is caused by the adjustment to the Corner's offset weight, which is not tied directly to the error, so what it considers the best balance doesn't produce the lowest error. Something to consider in the future.

With more neurons, things get worse. One aspect that definitely works well is preventing the runaway growth of the weights to infinity. I've tried an example that experienced such growth before, and it didn't diverge any more, even though it produced some weights around 50. But the trouble is that the computed descent rate is way too high. It works much better if I limit the rate to about 1/10 of the computed value: it descends rapidly but then gets stuck circling at about the same error. Reducing the rate to 1/100 makes it converge slower but reach a lower error, and 1/1000 produces an even lower error. Which means that it gets the rate wrong, and deciding on the descent rate from the last layer alone is not enough, the lower layers also need to be taken into account. The idea of estimating the error in the previous layer from both the gradient and the descent rate of its successor should work here, but it might involve a good deal of overhead. Needs more thinking.

On ReLU and Leaky ReLU it works very similarly, but it doesn't resolve the issue with centering the offsets, and the rate overestimation is particularly dangerous for the basic ReLU, since there it can easily lead to neurons becoming dead.

So I've thought: all right, maybe the overshoots are caused by the weights flipping sign too easily at the larger rate. So I've added the idea of a "weight floor": defining a small number (let's call it r) as the floor and making the values -r < weight < r illegal for the weights. Then if the descent tries to push a weight towards 0, crossing the +-r boundary, we stop it at r or -r (as appropriate) for this pass. If on the next pass the descent keeps pushing the weight further in the same direction, we "teleport" it from r to -r, or the other way around from -r to r. This prevents the large jumps, avoids getting stuck at 0, and gives a chance for a new gradient to be computed before going further, so that if it was an overshoot, the weight gets a chance to go back without a change of sign. And if different weights end up moving at different rates, it shouldn't matter much, because they still move in the right direction. Overall it works, and the "teleports" do happen occasionally. But it didn't resolve the overshooting issue. I guess the trouble is mostly with the overshoots that don't cause a sign flip.
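For the record, here is a minimal sketch of that weight floor logic in C++; the function and parameter names are hypothetical, it's not the exact code that went into FloatNeuralNet.

#include <cmath>

// The band (-r, r) is illegal for a weight. A step that tries to cross into
// it gets stopped at the boundary; only if the next step keeps pushing in the
// same direction does the weight "teleport" to the opposite boundary,
// flipping the sign in a controlled way.
double applyWeightFloor(double oldW, double newW, double r) {
    if (std::fabs(newW) >= r)
        return newW;                 // outside the floor band, keep the step as is
    if (std::fabs(oldW) > r)
        return (oldW > 0.) ? r : -r; // crossing into the band: stop at the boundary
    // Was already sitting on the boundary and is still pushed inwards:
    // teleport across zero to the opposite boundary.
    return (oldW > 0.) ? -r : r;
}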

Saturday, November 26, 2022

optimization 9 - Lipschitz constant and earlier layers of neural networks

I've tried to follow through with the path described in https://babkin-cep.blogspot.com/2022/10/optimization-8-descent-rate-and-v.html: find the Lipschitz constant for the neural network, both to try to make the neural network converge faster and to avoid overshooting badly when the weights at the neurons grow larger than 1. In that previous post I've looked at the last layer, so next I've figured that I should look at the earlier layers: since they don't have a square, they might be easier. Or maybe not: they feed into all the following layers.

But maybe we can make them easier. We can notice that in the original formula for gradient descent, every step is multiplied by the descent rate. So maybe we can just pull that descent rate factor out and "put it outside the parentheses". Which means that when we propagate the gradient from the last layer backwards, we can multiply it by the descent rate once, and that lets us skip multiplying by it everywhere else (which is also an optimization in the software sense!). The result of this multiplication (using r for the descent rate and v for the activated output of a previous layer's neuron)

m = r * df/dv

can also be seen as having a "physical meaning" that says "I want to shift the output v of the previous-layer neuron by this much to achieve the next value of v" (here t is the index of the training pass):

v[t+1] = v[t] - m. 

In reality, of course, the output v feeds multiple neurons of the following layer, and these feedbacks get added together. But that can be folded into the same physical interpretation too: if this output feeds into N neurons, we can say that

r = R / N

where R is some "basic descent rate", and so

v[t+1] = v[t] - R * sum{i}(df[i]/dv) / N

That is, the addition turns into averaging with a different rate R. 

But wait, there also are multiple training cases (K of them), the gradients of which also get added up. That's not a problem, because the outputs from the different training cases also get added up.

An interesting corollary from this seems to be that when a neural network has a different number of neurons per layer, the descent rate should be normalized to the number of outputs, and the layers that have fewer outputs can use a higher descent rate. But I suppose there is also a counter-argument: if there are many outputs, the feedback from them is more likely to be contradictory, so the sum is more likely to end up closer to 0 rather than all pointing in the same direction, and a higher rate may be useful there too. That's probably why nobody bothers with adjusting the descent rate per layer.
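To illustrate the bookkeeping, here is a small sketch of folding the rate into the backpropagated value once, with the normalization by the number of consumers; the function and parameter names are hypothetical, not taken from FloatNeuralNet.

#include <vector>

// Multiply the gradient by the per-consumer rate r = R / N once, producing
// the values m from the formula above; the layers below can then use m
// directly, without multiplying by the rate again: v[t+1] = v[t] - m.
std::vector<double> foldRateIntoGradient(
    const std::vector<double> &dfdv, // summed df/dv over the N consumers, per output v
    double R,                        // the "basic descent rate"
    int nConsumers)                  // how many next-layer neurons consume each output v
{
    std::vector<double> m(dfdv.size());
    double r = R / nConsumers;
    for (size_t i = 0; i < dfdv.size(); i++)
        m[i] = r * dfdv[i];
    return m;
}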

So all right, let's build a simple example. Suppose we have a ReLU-like activation function with a gradient of 1, and a neuron with 2 inputs and an offset weight:

v = u[1] y[1] + u[2] y[2] + b

And since the training cases are added up linearly, we can just start with an example with one training case. Then we could write the function to minimize as

v = u[1] y[1] + u[2] y[2] + b - m

(getting ahead, this expression turns out to be wrong, but I want to get gradually to why it's wrong). The partial derivatives will be

dv/du[i] = y[i]

dv/dy[i] = u[i]

dv/db = 1

and computing them at two points and finding the differences of the gradients and of the arguments in each dimension:

L^2 = ( (y[1, 1] - y[1, 0])^2 + (u[1, 1] - u[1, 0])^2 + (y[2, 1] - y[2, 0])^2 + (u[2, 1] - u[2, 0])^2 + (1 - 1)^2 ) /
  ( (u[1, 1] - u[1, 0])^2 + (y[1, 1] - y[1, 0])^2 + (u[2, 1] - u[2, 0])^2 + (y[2, 1] - y[2, 0])^2 + (b[1] - b[0])^2 )
  <= 1

because the denominator of the fraction has all the same terms for u and y (just in a different order), plus an extra positive term for b. So (1/L) >= 1, and propagating the same rate of descent or higher should be what we need.

To test this conclusion, I've made up a small example with concrete numbers and did what the TFOCS library does: take a few steps and compute the approximation of L by dividing the difference of gradients between the steps by the difference of weights. And... it didn't work out. Not only did it produce L > 1, but I'd go back and adjust the step to be smaller, and the new L would be even larger, and I'd adjust again to an even smaller step, and L would get even larger than that again.

What went wrong? I've got the formula wrong. Looking at the expression

v = u[1] y[1] + u[2] y[2] + b

it obviously has the minimum where all the variables are 0. So yeah, if we use L=1 and step straight to 0, that would minimize it. But we don't want to get to 0, we want to get to some value (v[t-1] - m). The right formula for minimization should be

abs( u[1] y[1] + u[2] y[2] + b - (v[t-1] - m) )

The absolute value part was missing. And I don't know how to compute L with the absolute value in there. I guess one way to handle it would be to substitute a square for the absolute value:

( u[1] y[1] + u[2] y[2] + b - (v[t-1] - m) )^2

It can probably be done just for the computation of L, even when the neural network itself stays without squares. But I'm not sure. 
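For what it's worth, here is a quick sketch of what that square would give, under the assumption that only the weights u[1], u[2], b are treated as the variables and the inputs y stay fixed for a given training case. The gradient of the squared expression is

df/du[i] = 2 y[i] ( u[1] y[1] + u[2] y[2] + b - (v[t-1] - m) )

df/db = 2 ( u[1] y[1] + u[2] y[2] + b - (v[t-1] - m) )

which is linear in (u[1], u[2], b), so the Lipschitz constant of the gradient works out to

L = 2 (y[1]^2 + y[2]^2 + 1)

But since the y values also change between the passes, this would be only a per-case approximation.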

Another interesting observation is what happened when my steps in the test overshot the empirically computed L: they would flip the sign of the weights. Which means that the occasional overshoots are important: without them the signs would stay as they had been initially assigned by the random seeding. Even if we could compute and properly use the right value of L, we might not want to, because we'd lose the sign flips.

Thursday, November 3, 2022

V-shaped activation functions what-ifs

I've noticed that the simulation of the square function with a single neuron plus the CORNER (V-shaped) activation function got worse after I clamped the weights to the range [-1, 1]. It just couldn't produce a large enough derivative to reach the full slope of the quadratic function, so it defaulted to the same shape as ReLU, going at slope 1 for the whole range of inputs [0, 1]. What if I increase the allowed range of weights to [-10, 10]? Much better: then it could simulate the bend, producing a total maximum slope of about 1.5. So it looks like being able to go to the higher weights might be useful, it just needs to be accompanied by the dynamic computation of L and of the training rate from it.
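For reference, a generic V-shaped activation of this kind can be sketched in C++ as below; this only illustrates the shape, with hypothetical parameter names, and it is not the actual CORNER definition from FloatNeuralNet.

// A V with the bend at x = offset: descending at slopeLeft to the left of it,
// rising at slopeRight to the right of it.
double vShaped(double x, double offset, double slopeLeft, double slopeRight) {
    double d = x - offset;
    return (d < 0.) ? -slopeLeft * d : slopeRight * d;
}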

Next, it looked like the right-side coefficient of the V could be fixed at 1, since the slope produced by the neuron itself would also work to produce the same effect. As it turns out, this doesn't work well. The trouble happens when the right side needs to change sign. When the right-side coefficient of the V adjusts, it changes sign easily. But without it, changing signs in the weights of the neuron itself doesn't work well, it often gets stuck. It's kind of interesting that when there is one variable, it changes sign easily; when there are two, they get stuck; but when there are three, they change signs easily again. Though the real reason is probably that the two variables push against each other by using each other's weights as gradients, while the third variable doesn't, it just gets driven from the top. So maybe if one of the two variables were also driven just from the top, that would prevent the two variables from getting stuck too. Will need to think about that.

Next, what if we do a common Leaky ReLU but adjust the offset using the same formula as with the CORNER? (Leaky ReLU has to be used because with the plain ReLU the gradients on the left side would be lost, and the computation for the offset won't work.) This worked about as badly as the normal Leaky ReLU, and maybe even worse, so it's definitely not a solution on its own.

So it looks like the CORNER function does have value. And maybe the right-side coefficient could be dropped if the main neuron weights can be made to change sign more actively, perhaps by feeding one of them the gradient from the top, or by applying the same gradient to them in both ways.

Wednesday, November 2, 2022

demo build target

My experimental code for FloatNeuralNet has grown kind of large, and it runs kind of slow under valgrind, which interferes with the normal tests. So I've added a new kind of entity in the C++ code: "demo", and moved all the slower experimental examples there. They are just like tests, only they sit in the "demo" subdirectories, in files starting with "d_". The target to build them is "mkdemo"; to run them without valgrind, "demo" or "qdemo"; with valgrind, "vdemo" (this is different from the tests, where the target "test" runs with valgrind).

BTW, the target "mktest" didn't run the library build as a dependency. It does now, in the main cpp directory (but not in the subdirectories). And the same applies to "mkdemo".