I've plotted the errors for multiple variations of the FloatNeuralNet options, using the Leaky ReLU activation as the least common denominator. I've added some new variations too: for example, I had never before tried the momentum method without my improvements, so I've made a custom build without them. The results are interesting; I should use more graphs in the future (and add proper graph generation rather than scraping the debugging log).
First of all, none of the non-stochastic implementations have done as well as the stochastic one. Well, they do better at the same training rate, but the catch is that the stochastic approach stays stable at a much higher training rate. It works fine with a 100 times higher training rate for the same initial seed, and at a 1000 times higher rate it gets rough but converges even faster still.
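To make the distinction concrete, here is a toy sketch of the two update schedules on a trivial one-weight model (plain C++ written for this post, not the actual FloatNeuralNet code): the stochastic version applies each sample's gradient immediately, while the non-stochastic (batch) version sums the gradients over the whole training set and applies them once per pass.

#include <cstdio>
#include <vector>

struct Sample { double x, y; };

// Gradient of the squared error (w*x - y)^2 / 2 with respect to w for one sample.
static double grad(double w, const Sample &s) { return (w * s.x - s.y) * s.x; }

int main()
{
    std::vector<Sample> data = {{1., 2.}, {2., 4.1}, {3., 5.9}};

    // Stochastic: update the weight after every sample.
    double ws = 0., rateS = 0.1;
    for (int pass = 0; pass < 100; pass++)
        for (const Sample &s : data)
            ws -= rateS * grad(ws, s);

    // Non-stochastic (batch): sum the gradients over the whole set,
    // then update once per pass.
    double wb = 0., rateB = 0.01;
    for (int pass = 0; pass < 100; pass++) {
        double g = 0.;
        for (const Sample &s : data)
            g += grad(wb, s);
        wb -= rateB * g;
    }

    printf("stochastic w = %f, batch w = %f\n", ws, wb);
    return 0;
}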
The second observation is that without my changes the momentum method doesn't work at all. At all. It just breaks down right away.
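For reference, the plain momentum update that breaks down here looks roughly like this (a generic sketch with made-up names, not the exact code in FloatNeuralNet):

#include <cstddef>
#include <vector>

// Classic momentum: the velocity accumulates the decayed gradient history
// and is applied to the weights on every step.
void momentumStep(std::vector<double> &w,          // weights
                  std::vector<double> &velocity,   // per-weight momentum state
                  const std::vector<double> &grad, // current gradient
                  double rate, double beta)        // training rate, momentum decay
{
    for (std::size_t i = 0; i < w.size(); i++) {
        velocity[i] = beta * velocity[i] - rate * grad[i];
        w[i] += velocity[i];
    }
}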
The third observation is that my auto-adjustment of the training rate starts out quite fast, somewhere between the stochastic method's 100x and 1000x training rates, but then flattens out and falls behind. So even as-is it's probably a valuable technique for quick short training runs.
As for why it flattens out, my hypothesis is this: imagine the optimization as a ball rolling downhill along the gradient. Killing the momentum in the dimensions where the gradient keeps changing sign prevents the ball from wobbling in the trough and jumping out of it; it works essentially as a shock absorber. But what if the trough doesn't run predominantly along one dimension? What if it goes at a diagonal across multiple dimensions? Then the damping that prevents the wobbling will also be slowing the ball down. This, I think, is what happens when the optimization slows down.

Fortunately, the mathematicians have a way to find the dominant direction of the trough and rotate the coordinates to it: the Principal Component Analysis does that. Then, I guess, the same momentum method can be applied to the rotated dimensions, and the result rotated back. Unfortunately, this whole procedure is probably quite expensive. And as the ball rolls, the rotation will change, so the post-rotation dimensions will change too, and it's not clear how to match them between iterations. Maybe the rotations should be recomputed only infrequently, when the progress slows down: that would both amortize their cost and keep stability between iterations.
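To put the shock-absorber part in code, here is a minimal sketch of a momentum step that kills the accumulated momentum in a dimension whenever the gradient there flips sign (a simplified illustration, not a copy of the FloatNeuralNet code):

#include <cstddef>
#include <vector>

// Momentum with the "shock absorber": when the gradient in a dimension flips
// sign between steps, the accumulated momentum in that dimension is dropped,
// so the ball stops wobbling across the trough instead of jumping out of it.
void dampedMomentumStep(std::vector<double> &w,
                        std::vector<double> &velocity,
                        std::vector<double> &prevGrad,
                        const std::vector<double> &grad,
                        double rate, double beta)
{
    for (std::size_t i = 0; i < w.size(); i++) {
        if (grad[i] * prevGrad[i] < 0.)
            velocity[i] = 0.;  // sign change: kill the momentum in this dimension
        velocity[i] = beta * velocity[i] - rate * grad[i];
        w[i] += velocity[i];
        prevGrad[i] = grad[i];
    }
}

The PCA idea would wrap this step in a rotation: rotate the gradient and the velocity into the principal-component coordinates, apply the same per-dimension damping there, rotate the update back, and recompute the rotation only once in a while to keep the cost down.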