I've finally got around to trying the non-standard activation function I've described in the last post, the one I've named the "corner" function. There is some bad news and some good news.
First, since I've added support for arbitrary activation functions to FloatNeuralNet, I've tried the "leaky ReLU" too. It's like ReLU but the left side has a small positive gradient instead of 0. On the weird task I've been playing with (simulation of a smooth quadratic function) it didn't work any better than normal ReLU.
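For reference, here is roughly what it looks like in code (the left-side slope of 0.01 here is a typical conventional value, not necessarily the one I've used):

double leakyRelu(double x) {
    // Identity on the right side, a small positive slope on the left.
    return (x >= 0.) ? x : 0.01 * x;
}

double leakyReluDeriv(double x) {
    return (x >= 0.) ? 1. : 0.01;
}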
Then I've tried my corner function, first with just a single neuron. The result was mixed: sometimes it did work decently, sometimes it didn't. It didn't when the breaking point happened to be outside of my range of input values. Just as I suspected, there is no consistent force that adjusts the breaking point's position horizontally. If it gets into the range of the training data, it does something (nothing great either), but if it isn't in range to start with, it does pretty much nothing, and the function reverts to a linear approximation.
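To recap the shape without restating the whole previous post, in a simplified sketch it can be thought of as a ReLU where the left side gets its own trainable slope (this parameterization is my shorthand for illustration, not the exact form from the last post), so that a negative slope bends the line into a V:

double corner(double x, double leftSlope) {
    // Right side is the identity, as in ReLU; the left side's slope
    // is a trainable weight, so a negative leftSlope produces a V.
    return (x >= 0.) ? x : leftSlope * x;
}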
So I've tried to solve this empirically. The idea is to measure the amount of error on each side of the breaking point. If one side has more error, this probably means that this side is more non-linear than the other. We could approximate it better if we had fewer sample points to deal with on the more non-linear side, so we have to move the breaking point toward that side until some points shift to the other side and the balance returns. This sort of reminds me of AdaBoost, which uses the result of a partial solver to rebalance the results by some criterion, but obviously it's not quite the same thing.
To give an example in beautiful ASCII art, in a situation like this, where the breaking point is marked with "@", the line produced by the neuron with "-" and the training points with "*":
* -- -- ---- * -- * ---- * -- * ---- -- * *@ *
shifting the breaking point to the right would allow a better match on the right side without messing up the left side:
* - - ---- * - * ---- * - * ---- -* *---*- @
For the measure of the error we can use the absolute value of sigma (sigma being the error gradient propagating from the higher layer). It's absolute because we don't care whether the point is above or below our computed value; that part would be sorted out by the optimization of the other weights. And the thing that shifts the breaking point horizontally is the bias weight in the neuron. So if the value produced by the neuron before activation is positive (i.e. the point is on the right side of the breaking point), we can use the gradient abs(sigma), and if negative (on the left side), then -abs(sigma).
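In code, this extra push for the bias weight would look something like this sketch (the names are mine, not the actual FloatNeuralNet internals):

#include <cmath>

double breakPointPush(double preActivation, double sigma) {
    // The absolute value, because only the amount of error matters
    // here; its direction is handled by the ordinary gradients.
    double a = std::fabs(sigma);
    // Positive pre-activation: the point is to the right of the
    // breaking point; negative: to the left of it.
    return (preActivation >= 0.) ? a : -a;
}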
The question is, how exactly to fold it into the gradient. Just using it as the gradient seems to have too strong an effect. Except when the neuron produces a signal that is always very close to 0: then it's not enough, because sigma becomes very small. Adding a small constant of the same sign solves this particular problem, but the rest of the time it still seems too strong. For that I've tried taking a fraction of the absolute sigma. Then I've tried mixing it with the traditional gradient. Both worked better, but then there is the question of what fraction to take and in what proportion to mix. I don't know yet how to figure that out, and I'm not sure that the empirical values transfer well between different examples. So the other way I've tried was to compute
gradient = output_before_activation * abs(sigma)
It seems to work decently well, but it prioritizes the far-away points over the near ones, and I'm not sure that this is the right thing to do; it's rather counter-intuitive.
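Putting the variants described above together in one sketch (the constants here are placeholders for illustration, not tuned values from my experiments):

#include <cmath>

const double kSmall = 0.001;  // keeps the push alive when sigma is near 0
const double kFraction = 0.1; // scales down the abs(sigma) push
const double kMix = 0.5;      // proportion of mixing with the classic gradient

// Variant 1: a fraction of the signed abs(sigma), plus a small
// constant of the same sign so it doesn't die out near 0.
double pushFraction(double preAct, double sigma) {
    double s = (preAct >= 0.) ? 1. : -1.;
    return s * (kFraction * std::fabs(sigma) + kSmall);
}

// Variant 2: a mix of the signed abs(sigma) and the classic gradient.
double pushMixed(double preAct, double sigma, double classicGrad) {
    double s = (preAct >= 0.) ? 1. : -1.;
    return kMix * s * std::fabs(sigma) + (1. - kMix) * classicGrad;
}

// Variant 3: scale by the pre-activation value itself, which provides
// both the sign and the weighting, favoring the far-away points.
double pushScaled(double preAct, double sigma) {
    return preAct * std::fabs(sigma);
}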
With this, a one-neuron model works quite well. The problems start with more neurons: sometimes they converge quite well, much better than what I've seen with ReLU, and sometimes they don't. One weird thing that happens is that the model converges quite well for a while, then all of a sudden the error jumps up, and then it continues converging. Sometimes the training rate turns out to be too high and the model explodes. Maybe those jumps in the other cases are also the result of the training rate being too high at some points. Or maybe the jumps happen when the breaking point shifts and a training point ends up on the other side of it. Or maybe both causes are connected, I don't know yet. Sometimes the training rate turns out to be too low. This generally happens when a neuron is driven to produce values near 0, which tends to happen when the random seeding makes the V point upside down; then, as it tries to right itself, it sometimes ends up with a combination of weights that shrinks the gradients. This is not a unique problem, it's something that can happen with any neural network. By the way, this time the variant that computes the gradient for the whole training pass before adjusting the model tends to be more stable than the one that makes a step after each training case.
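As a generic illustration of that last difference (not the actual FloatNeuralNet API, just a one-weight toy):

#include <cstddef>
#include <vector>

// Gradient of one training case for a model with a single weight;
// a placeholder type for the sketch.
using CaseGrad = double (*)(double weight, double trainCase);

// Step after each training case: cheaper per pass, but noisier.
void perCasePass(double &w, const std::vector<double> &cases,
                 CaseGrad grad, double rate) {
    for (double c : cases)
        w -= rate * grad(w, c);
}

// Accumulate the gradient over the whole pass, then step once: the
// averaging tends to smooth out the jumps.
void wholePass(double &w, const std::vector<double> &cases,
               CaseGrad grad, double rate) {
    double sum = 0.;
    for (double c : cases)
        sum += grad(w, c);
    w -= rate * sum / double(cases.size());
}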
So where does this leave me? It looks like the whole thing could benefit from some momentum method to get out of the low-gradient situations, and from auto-detection of a learning rate that is too high in the high-gradient situations. And maybe from some other way to prevent the wedging at 0. The empirical solution for positioning the breaking point needs to be worked through the math, and maybe that would suggest a better solution too. And maybe the same way of adjusting the bias weight can be applied to the classic ReLU too (only the left side, which generates 0, would probably have to use the sigma from before it got zeroed by the activation derivative).
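In a purely speculative sketch, for ReLU it might look like this (sigmaRaw standing for the sigma from before the multiplication by the activation derivative; again, my names, untested):

#include <cmath>

double reluBreakPointPush(double preActivation, double sigmaRaw) {
    // On the left side the ReLU derivative zeroes the propagated
    // sigma, so the push has to use the raw sigma from before that.
    double a = std::fabs(sigmaRaw);
    return (preActivation >= 0.) ? a : -a;
}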