I got to wondering: could a neural network that experienced a "meltdown" from a too-high training rate be restored? I've got such an example from MNIST and dumped out its contents. What happened inside is that pretty much all the weights got pushed to the values of 1 and -1, the neurons becoming very much alike. So when trying to train it again, this lack of, pardon my French, diversity wouldn't let the optimization progress far: it just optimizes this one neuron state per layer, in many copies.
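Just to make "melted down" concrete, this state is easy to spot with a quick diagnostic of the following kind. This is only an illustrative sketch, not part of FloatNeuralNet, and the names in it are made up:

#include <cmath>
#include <cstddef>
#include <vector>

// Fraction of weights pinned at the +-bound boundary; a value close to 1.0
// means the network has collapsed into near-identical saturated neurons.
double saturatedFraction(const std::vector<double>& weights,
                         double bound = 1.0, double eps = 1e-6)
{
    std::size_t hit = 0;
    for (double w : weights) {
        if (std::fabs(std::fabs(w) - bound) < eps)
            ++hit;
    }
    return weights.empty() ? 0.0 : double(hit) / double(weights.size());
}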
For some time I've had an idea of how this problem could be resolved: by a partial randomization of gradients. So I've implemented it with a simple saw-toothed "randomization": each next weight gets its gradient reduced slightly more, up to a limit, then the reduction drops back down and the next "sawtooth" starts. The starting position of the teeth gets shifted by one on each training pass.
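In code, the idea looks roughly like this (a minimal sketch; the names applyTweak and toothPeriod are made up for illustration, and the actual FloatNeuralNet code may scale things differently):

#include <cstddef>
#include <vector>

// Scale each gradient component so that the reduction ramps up from 0 to
// tweakRate across one "tooth" of toothPeriod weights, then drops back to 0
// for the next tooth. The whole pattern shifts by one weight on every pass.
void applyTweak(std::vector<double>& grad, double tweakRate,
                std::size_t toothPeriod, std::size_t pass)
{
    if (toothPeriod < 2)
        return; // a degenerate tooth would do nothing sensible
    for (std::size_t i = 0; i < grad.size(); ++i) {
        std::size_t pos = (i + pass) % toothPeriod; // position within the tooth
        double reduction = tweakRate * double(pos) / double(toothPeriod - 1);
        grad[i] *= (1.0 - reduction);
    }
}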
I've started by "tweaking" in this way 1% of the gradient size (I've named this option of FloatNeuralNet tweakRate_), and combined it with the options momentum_ and autoRate2_. It did work some but barely made a dent. OK, so I've bumped it up to 30%. That worked much better but still not quite great. Over about 300K training passes it got to a mean-square error of about 0.15 (by comparison, the normal training from a random starting point gets to about 0.05 in 10K steps) and would not budge much any more. The verification on the test cases was much closer: a mean-square error of about 0.24 instead of 0.19 and an error rate of about 7% instead of 5.5%. So it might not be that bad after all. The combination of the new auto-rate and momentum descent worked great at preventing another meltdown. Interestingly, at the start all the gradient was concentrated in the last of the 3 layers, and then it gradually shifted towards the first layer.
Then I've tried the same tweak rate of 0.3 for the training from a random initial state, and it didn't have any detrimental effect at all; it even did slightly better than without it. So it should be safe to use in the general case as a cheap preventative measure.
This also gave me an opportunity to look more into the tuning of the auto-rate algorithm, which I've made a bit better, and also to look into what gradients are where. As it turns out, the highest gradient dimensions by far are at the weights that have reached the [-1, +1] boundary, and they skew the norm2 and mean-square of the gradient a lot. Changing the code to mark such gradients post-factum as 0 gave me an opportunity to count them separately and to exclude them from the means. Their number grows early in the training and then gradually reduces (but not to 0).
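The post-processing amounts to something like this (again a sketch with made-up names, not the actual FloatNeuralNet code; grad and weights are assumed to be parallel arrays of the same size):

#include <cmath>
#include <cstddef>
#include <vector>

struct GradStats {
    std::size_t clamped;   // number of gradient components zeroed at the boundary
    double meanSquare;     // mean-square of the remaining components only
};

// Zero out the gradient components whose weights sit at the boundary,
// count them separately, and exclude them from the mean-square.
GradStats excludeClamped(std::vector<double>& grad,
                         const std::vector<double>& weights,
                         double bound = 1.0, double eps = 1e-9)
{
    GradStats st{0, 0.0};
    double sumSq = 0.0;
    for (std::size_t i = 0; i < grad.size(); ++i) {
        if (std::fabs(weights[i]) >= bound - eps) {
            grad[i] = 0.0;   // the weight is pinned at the boundary anyway
            ++st.clamped;
        } else {
            sumSq += grad[i] * grad[i];
        }
    }
    std::size_t live = grad.size() - st.clamped;
    st.meanSquare = live ? sumSq / double(live) : 0.0;
    return st;
}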
How about raising the boundaries? That should reduce the number of dimensions hitting them and make the training faster, right? It also would be a good test of the new auto-rate algorithm, since as I've shown before, the weights over 1 are much more susceptible to meltdowns. I've tried the boundaries of 10 and 100. The auto-rate worked great but the training got slower. As far as I can tell, the higher weights more often trigger the situations where the auto-rate algorithm drops the training rate down, and the rate ends up 10 to 100 times lower.
But the bad news for the auto-rate logic is that manually picking a just-high-enough training rate still ultimately produces a slightly better result. The auto-rate algorithm starts with a similar rate but then gradually drops it by about 3 orders of magnitude. And as I've been watching the mean-square errors pass by pass, I could see that they behaved differently: the fixed-rate algorithm would periodically have the error grow and maybe even stay up for a while, but then drop lower than where it was before; the auto-rate algorithm tends to just chisel away at the error little by little, and while it still has the error grow a little periodically, it squashes that growth very fast. So perhaps the conservatively low rate gets the descent trapped in some local minimum, while the fixed rate breaks out of them (when it doesn't lead to a meltdown). If I let the auto-rate algorithm grow the rate more, and then drop it when it gets out of control, it actually does worse. But maybe some better adaptivity could be devised.
And/or maybe bring the stochastic descent back into the mix. I've been computing the full gradient because this way any kind of postprocessing represents a relatively low overhead, done once per training pass; doing the same after each training case would be very slow. But the stochastic descent is much more resistant to a too-high descent rate, and should be able to shake out of the local minima better. So maybe the two can be combined, doing a few passes stochastic, then a few passes deterministic, and so on, with the rate computed on the deterministic passes fed in to control the stochastic passes.
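The alternation could look roughly like the outline below. The types and functions in it (Net, Case, trainFullPass, trainOneCase, currentAutoRate) are stand-ins, not the real FloatNeuralNet interface; the point is only the control flow: a few full-gradient passes let the auto-rate settle, and the resulting rate is then reused for a few stochastic passes that may shake the descent out of a local minimum.

#include <vector>

struct Net {};   // placeholder for the network object
struct Case {};  // placeholder for one training case

// Stubs standing in for the real training calls.
void trainFullPass(Net&, const std::vector<Case>&, double /*rate*/) {}
void trainOneCase(Net&, const Case&, double /*rate*/) {}
double currentAutoRate(const Net&) { return 0.001; }

void alternatingTraining(Net& net, const std::vector<Case>& trainingSet,
                         int nCycles, int detPasses, int stochPasses,
                         double rate)
{
    for (int cycle = 0; cycle < nCycles; ++cycle) {
        // Deterministic (full-gradient) passes: the auto-rate logic adapts.
        for (int p = 0; p < detPasses; ++p) {
            trainFullPass(net, trainingSet, rate);
            rate = currentAutoRate(net);   // remember the adapted rate
        }
        // Stochastic passes: update after each case, reusing that rate.
        for (int p = 0; p < stochPasses; ++p)
            for (const Case& c : trainingSet)
                trainOneCase(net, c, rate);
    }
}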