After the empirical computation of the Lipschitz constant for the lower layers of a neural network didn't pan out, as described in the last post, I've tried doing it for the last layer. The last layer minimizes an honest squared function, so it should not have the same issues as the previous layers. And I've done it in code, see https://sourceforge.net/p/triceps/code/1770/tree/.
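(To be more precise about the "honest squared function": if there is no activation between the last layer's output and the squared error, and the inputs x_k coming into the layer are treated as constants for the moment, the loss is a plain quadratic in the layer's weights v:

    f(v) = \sum_k (v^\top x_k - y_k)^2, \qquad
    \nabla f(v) = 2 \sum_k (v^\top x_k - y_k)\, x_k, \qquad
    \nabla^2 f(v) = 2 \sum_k x_k x_k^\top

so its gradient has the well-defined Lipschitz constant L = 2\lambda_{\max}(\sum_k x_k x_k^\top). With an activation like Corner or ReLU in between, and with the inputs varying too, this holds only piecewise and locally, which is why the empirical estimate of 1/L below is still the practical way to get at it.)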
It computes 1/L empirically like the TFOCS library: it keeps the weights and gradients from the last pass (including, by the way, the inputs to the last layer, since they are variables in the formula too), finds the differences, computes their norms and eventually 1/L, and compares it with the descent rate used for the pass. If there is room, it gradually grows the rate. If 1/L has shrunk and the last rate was too high, it reduces the rate and redoes the previous step (this is somewhat tricky, since the gradient is computed only on the following step, but I've got the rollback to work). Most of the time one rollback is enough to set things straight. But if not, well, it's just one overshoot that would get straightened out in the following steps, so I've limited the rollback to no more than once in any case.
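Roughly, the rate adjustment logic looks like this (a minimal sketch in C++, with the names and the grow/shrink factors made up for illustration, not the actual code from the Triceps repository):

    // A minimal sketch of the empirical 1/L estimate and rate adjustment,
    // in the style of the TFOCS library. The names and the grow/shrink
    // factors are made up for illustration, not the actual Triceps code.
    // The "state" is the concatenation of the layer's weights and its inputs,
    // since both are variables of the last layer's function.
    #include <cmath>
    #include <vector>

    struct RateEstimator {
        std::vector<double> prevState; // state saved from the previous pass
        std::vector<double> prevGrad;  // gradient saved from the previous pass
        double rate = 1e-3;            // current descent rate
        bool justRolledBack = false;   // allows at most one rollback in a row

        // Returns the rate for the next step; sets rollback=true when the
        // caller should restore the previous weights and redo the step
        // with the returned (smaller) rate.
        double update(const std::vector<double> &state,
                      const std::vector<double> &grad, bool &rollback)
        {
            rollback = false;
            if (!prevState.empty()) {
                double dsNorm = 0., dgNorm = 0.;
                for (size_t i = 0; i < state.size(); i++) {
                    double ds = state[i] - prevState[i];
                    double dg = grad[i] - prevGrad[i];
                    dsNorm += ds * ds;
                    dgNorm += dg * dg;
                }
                dsNorm = std::sqrt(dsNorm);
                dgNorm = std::sqrt(dgNorm);
                if (dgNorm > 0.) {
                    double invL = dsNorm / dgNorm; // empirical 1/L
                    if (rate < 0.5 * invL) {
                        rate *= 1.1; // there is room, grow gradually
                    } else if (rate > invL && !justRolledBack) {
                        rate = 0.5 * invL; // overshot: shrink and redo the step
                        rollback = true;
                    }
                }
            }
            justRolledBack = rollback;
            if (!rollback) { // on a rollback the previous pass stays the reference point
                prevState = state;
                prevGrad = grad;
            }
            return rate;
        }
    };

The caller is expected to keep its own copy of the previous weights, so that on a rollback it can restore them and re-apply the same gradient with the smaller returned rate.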
The results are rather ambivalent. If the whole model consists of one Corner-activated neuron, it works great: it finds the right rate and converges quickly. The extra overhead is definitely worth the trouble there. Though interestingly, it first finds a solution with a slightly lower squared error, and then as it does more steps, the gradient shrinks but the squared error grows a little. My guess is that this is caused by the adjustment to the Corner's offset weight, which is not tied directly to the error, so what it thinks is the best balance doesn't produce the lowest error. Something to consider in the future.
With more neurons, things get worse. One aspect that definitely works well is preventing the runaway growth of the weights to infinity. I've tried an example that experienced such a growth before, and it didn't diverge any more, even though it produced some weights around 50. But the trouble is that the found descent rate is way too high. It works much better if I limit the rate to about 1/10 of the computed value: it descends rapidly but then gets stuck circling at about the same error. Reducing the rate to 1/100 makes it converge more slowly but reach a lower error, and 1/1000 produces an even lower error still. Which means that it gets the rate wrong, and deciding on the descent rate from the last layer alone is not enough; the lower layers also need to be taken into account. The idea of estimating the error in the previous layer from both the gradient and the descent rate of its successor should work here, but it might involve a good deal of overhead. Needs more thinking.
On ReLU and Leaky ReLU it works very similarly but doesn't resolve the issue with centering the offsets, and the rate overestimation is particularly dangerous for the basic ReLU, since there it can easily lead to neurons becoming dead.
So I've thought, alright, maybe the overshoots are caused by the weights flipping sign too easily at the larger rate. So I've added the idea of a "weight floor": define a small number (let's call it r) as the floor and make the values (-r < weight < r) illegal for the weights. Then if the descent tries to push a weight towards 0, crossing the +-r boundary, we stop it at the appropriate r or -r for this pass. If on the next pass the descent keeps pushing the weight further in the same direction, we "teleport" it from r to -r, or the other way around from -r to r. This prevents the large jumps, avoids getting stuck at 0, and gives a chance for a new gradient to be computed before going further, so that if it was an overshoot, the weight gets a chance to go back without a change of sign. And if different weights end up moving at different rates, it shouldn't matter much, because they still move in the right direction. Overall it works, and the "teleports" do happen occasionally. But it didn't resolve the overshooting issue. I guess the trouble is mostly with the overshoots that don't cause a sign flip.
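In code the floor amounts to roughly this (a minimal sketch; the function name is made up for illustration, and the step argument is the already-computed delta, i.e. minus the rate times the gradient):

    #include <cmath>

    // Apply one descent step to a weight under the "weight floor" rule:
    // the band (-r, r) is illegal, a step crossing into it stops at the
    // near boundary, and a weight already sitting on the boundary that is
    // still pushed inward gets teleported to the opposite boundary.
    // Assumes the weights start out with |weight| >= r.
    // Illustration only, not the actual Triceps code.
    double floorStep(double weight, double step, double r)
    {
        double next = weight + step;

        if (std::fabs(weight) > r) { // currently outside the illegal band
            if (weight > 0. && next < r)
                return r;   // pushed towards 0 across the boundary: stop at r
            if (weight < 0. && next > -r)
                return -r;  // same on the negative side
            return next;    // a normal step, stays legal
        }

        // Currently sitting right on a boundary, |weight| == r.
        if (weight > 0. && step < 0.)
            return -r;      // still pushed inward: teleport to the other side
        if (weight < 0. && step > 0.)
            return r;
        return next;        // pushed back outward: a normal step
    }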