Thursday, November 3, 2022

V-shaped activation functions what-ifs

I've noticed that a simulation of the square function with a single neuron plus a CORNER (V-shaped) activation function got worse after I clamped the weights to the range [-1, 1]. It just couldn't produce a sufficient derivative to reach the full slope of the quadratic function, so it defaulted to the same shape as ReLU, going at slope 1 over the whole input range [0, 1]. How about if I increase the allowed range of weights to [-10, 10]? Much better: then it could simulate the bend, producing a total maximum slope of about 1.5. So it looks like being able to go to higher weights might be useful; it just needs to be accompanied by the dynamic computation of L and of the training rate from it.
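For illustration, here is a minimal NumPy sketch of this kind of experiment (not the actual code; it assumes the CORNER activation is a V with trainable left and right slopes, folds the bend's offset into the neuron bias, and clamps every trainable value to the same range; the hyperparameters and initial values are arbitrary):

    import numpy as np

    xs = np.linspace(0.0, 1.0, 101)
    ys = xs ** 2                       # target: the square function

    def train(limit, steps=20000, rate=0.01):
        # One neuron z = w*x + b followed by a V-shaped activation with
        # trainable left and right slopes sl, sr (the bend's offset is
        # folded into the neuron bias b in this sketch).
        w, b, sl, sr = 0.5, 0.0, -0.5, 1.0
        for _ in range(steps):
            z = w * xs + b
            right = z >= 0.0
            yhat = np.where(right, sr * z, sl * z)
            err = yhat - ys                      # dLoss/dyhat up to a constant factor
            dz = np.where(right, sr, sl) * err   # gradient through the active slope
            w -= rate * np.mean(dz * xs)
            b -= rate * np.mean(dz)
            sl -= rate * np.mean(np.where(right, 0.0, z) * err)
            sr -= rate * np.mean(np.where(right, z, 0.0) * err)
            # Clamp every trainable value to [-limit, limit].
            w, b, sl, sr = (float(np.clip(v, -limit, limit)) for v in (w, b, sl, sr))
        z = w * xs + b
        mse = float(np.mean((np.where(z >= 0.0, sr * z, sl * z) - ys) ** 2))
        return dict(w=w, b=b, sl=sl, sr=sr, mse=mse)

    for limit in (1.0, 10.0):
        print(limit, train(limit))   # compare the fit under the two clamping ranges

With the tight clamp the combined right-side slope w*sr can't exceed 1, which is what forces the ReLU-like shape; with the wider clamp the V can tilt to approximate the quadratic better.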

Next, it looked like the right-side coefficient of the V could be fixed at 1, since the slope produced by the neuron itself would also work to produce the same effect. As it turns out, this doesn't work well. The trouble happens when the right side needs to change sign. When the right-side coefficient of the V adjusts, it changes sign easily. But without it, changing signs in the weights of the neuron itself doesn't work well; they often get stuck. It's kind of interesting: with one variable, the sign changes easily; with two, they get stuck; but with three, they change signs easily again. Though the real reason is probably that two variables push against each other by using each other's weights as gradients, while the third variable doesn't: it just gets driven from the top. So maybe if one of the two variables were also driven just from the top, that would prevent the two-variable case from getting stuck too. Will need to think about that.
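The stuck-sign effect can be shown on a hypothetical toy model (not the actual neuron code): fit a negative target with a product of trainable factors that all start positive. With one factor, plain gradient descent flips the sign directly; with two, each factor's gradient is scaled by the other, so both shrink toward the saddle at zero and the flip stalls:

    import math

    def fit_product(n_factors, target=-1.0, rate=0.1, steps=2000):
        ws = [0.5] * n_factors
        for _ in range(steps):
            p = math.prod(ws)
            err = p - target                              # d(squared error)/dp, up to a factor of 2
            grads = [err * math.prod(ws[:i] + ws[i+1:])   # dp/dw_i = product of the other factors
                     for i in range(n_factors)]
            ws = [w - rate * g for w, g in zip(ws, grads)]
        return ws

    print(fit_product(1))   # single weight: flips sign and reaches the target
    print(fit_product(2))   # two weights: both collapse toward 0 and get stuck

The same little harness could be used to try the "drive one of the two from the top" idea by updating one factor from the error alone, ignoring its peer.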

Next, what if we do a common Leaky ReLU but adjust the offset using the same formula as with the CORNER? (Leaky ReLU has to be used because with plain ReLU the gradients on the left side would be lost, and the computation for the offset won't work.) This worked about as badly as the normal Leaky ReLU, and maybe even worse, so it's definitely not a solution on its own.
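A rough sketch of what such an activation could look like (hypothetical names; this does not reproduce the CORNER offset formula, it just treats the offset as a parameter trained by ordinary backpropagation, which at least shows why the left-side gradient matters):

    import numpy as np

    def leaky_relu_offset_forward(z, off, alpha=0.01):
        # Shift the bend to z = off, then apply Leaky ReLU.
        s = z - off
        return np.where(s >= 0.0, s, alpha * s)

    def leaky_relu_offset_backward(z, off, grad_out, alpha=0.01):
        # Local slope: 1 on the right branch, alpha on the left branch.
        slope = np.where(z - off >= 0.0, 1.0, alpha)
        grad_z = grad_out * slope              # gradient passed down to the neuron
        grad_off = -np.sum(grad_out * slope)   # gradient for the trainable offset
        return grad_z, grad_off

    # With plain ReLU (alpha = 0) the left-branch slope is 0, so samples to the
    # left of the bend contribute nothing to grad_off and the offset stops moving.
    z = np.array([-0.5, 0.2, 0.7])
    print(leaky_relu_offset_backward(z, off=0.1, grad_out=np.ones(3)))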

So it looks like the CORNER function does have value. And maybe the right-side coefficient can be gotten rid of if the main neuron weights can be made to change sign more actively, perhaps by feeding one of them the gradient from the top, or by applying the same gradient to them in both ways.
