Sunday, January 15, 2023

trapeze to bitmap for MNIST

Continuing with the trapeze-based representation of the handwritten images that I've described in the previous post, next I've tried to do a reverse transformation: from the set of trapezes to a bitmap that tries to preserve the significant features detected by the trapezes. And if you've read the previous post, you can probably guess the result: it became a little worse yet!

Looking at the images that got misrecognized, I think I understand why. Think for example of a "4" and a "9". A "4" is often drawn with an open top. But when we draw a "9", there is also sometimes a small opening in the top right corner. When I do the detection of trapezes, this little opening becomes a significant feature, and when I convert these trapezes back to a bitmap, this little opening becomes a largish opening, so now the "9" looks more like a "4"! I guess, with a large and representative enough training set, they could be differentiated well. But not if all the "9"s with an opening are in the test set (and yes, if I fold the test set into the training set, they get recognized quite well).

While looking at it all, I've also noticed a few more interesting things. I've turned the classifier mode on again (and improved it a bit to make the gradients more stable), and noticed that it drives the error on the training set down a lot. The classifier mode tries to make sure that the training cases that produce the wrong result get more attention by multiplying their gradients, which is essentially equivalent to adding 99 more copies of the same training case. Turn the classifier mode on, and the training error goes down from 0.06 to 0.025 in almost no time. Which means that a lot of this error is caused by the few outliers. And maybe another way to fix it would be to take a larger power in the loss function, say the 4th power instead of the square. But the error on the test set doesn't budge much, so maybe this is a misleading measure that doesn't matter much and can cause overfitting.
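
To illustrate what I mean by multiplying the gradients, here is a sketch of the idea, not the actual FloatNeuralNet code; the boost factor of 100 and the loss gradients below are my example values:

// The gradient of a wrong-answer case gets multiplied by a boost factor,
// which affects the total gradient the same way as adding (boost - 1)
// extra copies of that training case.

// Gradient of the squared loss (out - target)^2 with respect to out.
double lossGrad2(double out, double target) { return 2. * (out - target); }

// The same for a 4th-power loss (out - target)^4, which punishes outliers harder.
double lossGrad4(double out, double target) {
    double d = out - target;
    return 4. * d * d * d;
}

// Hypothetical per-case gradient with the classifier-mode boost applied.
double boostedGrad(double out, double target, bool wrongAnswer) {
    const double boost = 100.; // equivalent to adding 99 more copies of the case
    double g = lossGrad2(out, target);
    return wrongAnswer ? g * boost : g;
}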

I've found a bug in my code that computed how many cases got recognized with a very low confidence: where even with the right outcome the value of the highest output was still below 0. Due to this bug, all the cases that were labeled with "0" were counted as low-confidence. So this measure wasn't 17.5% on the training set, it was more like 1%, and in the classifier mode it goes to near 0. But on the MNIST test set, the total of misrecognized and low-confidence cases is still near 20%.

The auto-adjustment of the descent rate still works well, and I've bumped it up to be more aggressive: changed the step up from 1.1 to 1.2, and the step down from 0.1 to 0.2. Now I think it exhibits the behavior that I've seen with a higher manually-set rate, where once in a while the error would bump up a little, explore the surroundings, and then quickly drop down below the previous low. And the attempts to start the destructive resonance are still well-arrested.
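
The shape of this adjustment is roughly as follows; this is a simplified sketch, the real logic lives behind the autoRate2_ option, and the variable names and the initial rate here are mine:

double trainingRate = 0.01;   // the starting value here is arbitrary
const double stepUp = 1.2;    // was 1.1
const double stepDown = 0.2;  // was 0.1

// Called once per training pass, after looking at the gradients.
void adjustTrainingRate(bool resonanceDetected) {
    if (resonanceDetected)
        trainingRate *= stepDown; // arrest the brewing destructive resonance
    else
        trainingRate *= stepUp;   // otherwise keep gently pushing the rate up
}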

I've added a printout of gradients by layer, and they do tend to vary in an interesting way, kind of "sloshing about". At the start of the training the high layers usually show the high gradients, and the low layers show the low gradients. But this gradually reverses, and after a few thousand training passes you get the high gradients in the low layers, and low gradients in the high layers. Unless you then go and change some training criteria, then the high layers get the higher gradients again until things settle. Which probably means that selecting separate training rates by layer might be worth a try. Or maybe even selecting them separately for each weight, like the Adam algorithm that I've mentioned before does (I haven't tried this specific algorithm yet).
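
The printout itself is nothing fancy, something along these lines, assuming the gradients are available as one vector per layer (which is my simplification of the internals):

#include <cstdio>
#include <cmath>
#include <vector>

// Print the root-mean-square of the gradient for each layer.
void printGradientsByLayer(const std::vector<std::vector<double>> &gradByLayer) {
    for (size_t layer = 0; layer < gradByLayer.size(); layer++) {
        const std::vector<double> &g = gradByLayer[layer];
        if (g.empty())
            continue;
        double sum2 = 0.;
        for (double v : g)
            sum2 += v * v;
        std::printf("layer %zu: rms gradient %g\n", layer, std::sqrt(sum2 / g.size()));
    }
}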


Sunday, January 8, 2023

trapeze data representation for MNIST

As I've said before, the trouble with the MNIST data set is that the test set contains cases that are substantially different from any case in the training set. So doing well on it requires some kind of generalization from the training set, to be able to correctly recognize the test cases. One idea I've had is to parse the images into the logical parts that look like trapezes.

For example, consider an image of the digit 0. It can be broken somewhat like this:

_   /===\   _
   //   \\
_ //     \\ _
  \\     //
_  \\   //  _
    \===/  

Here, going from the top, we have a horizontal bar that can be seen as a horizontal band containing a trapeze of whitespace, followed by a drawn trapeze, followed by another whitespace trapeze. I've separated it by the underbar marks on the sides. Then the sides go expanding: there is still whitespace on the left, then goes the bar expanding to the left, an expanding trapeze of whitespace in the middle, the bar expanding to the right, and another whitespace on the right. Then it goes symmetrically shrinking on the bottom side, and completes with another horizontal bar. There could also be a vertical part on the sides in the middle. Once we recognize a break-down like this, it doesn't matter any more what exact size the symbol 0 is, it's still the same symbol. It gets harder with the many handwritten variations, but maybe we get enough samples to build a recognition of all the major ones. So I've decided to try this path with MNIST.

Recognizing the trapezes in a strict black-and-white image is much easier than with the halftones. So the first thing I've done was to generate a B&W image by filtering on a fixed level. I've done that, and since I had it, I've tried to train a model on it. And it did a little worse than on the halftone image.
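
The filtering itself is trivial, something like this, assuming the pixels are scaled to [0, 1]; the cutoff level is a parameter here, not necessarily the exact value I've used:

#include <vector>

// Turn a halftone image into a strict black-and-white one.
std::vector<double> toBlackAndWhite(const std::vector<double> &pixels, double level) {
    std::vector<double> bw(pixels.size());
    for (size_t i = 0; i < pixels.size(); i++)
        bw[i] = (pixels[i] >= level) ? 1. : 0.;
    return bw;
}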

Well, the next simple approximation is to encode the rows of pixels as run-lengths. So I've done that, and I've tried to train a model, and guess what, it has done a little worse yet. Which is a bad sign. Part of the problem is that each row becomes a different size in runs, and to keep the fixed size, the unused runs at the end have to be filled with something. Originally I've filled them with 0s, and then I've tried to fill them with -1. Which made things a little better, but still a little worse than the plain B&W image.
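
For the record, the run-length encoding of one row looks roughly like this; it's a sketch with my own names, assuming that each row is taken to start with a whitespace run, and any runs past maxRuns simply get dropped:

#include <vector>

std::vector<double> runLengthEncodeRow(const std::vector<double> &bwRow, size_t maxRuns) {
    std::vector<double> runs;
    double color = 0.; // each row is assumed to start with a whitespace run
    double len = 0.;
    for (double px : bwRow) {
        if (px == color) {
            len += 1.;
        } else {
            runs.push_back(len);
            color = px;
            len = 1.;
        }
    }
    runs.push_back(len);
    runs.resize(maxRuns, -1.); // pad the unused runs with -1 (worked better than 0)
    return runs;
}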

At this point the indications were kind of bad but if I didn't try the trapezes, I wouldn't know, right? So I've tried. I've encoded each trapeze with 3 numbers: 

(1) The average slope coefficient of its left side, computed as dx/dy (since a vertical line is possible and a horizontal line is not). This also happens to be the slope of the right side of the previous trapeze. It's an average because in reality the pixels represent curved lines, and I just approximate them with the straight trapezes.

(2) The width of the top of the trapeze.

(3) The width of the bottom of the trapeze.

To get the ordering of whitespaces and drawings right, I always start each row of trapezes and also end it with a whitespace, and whatever gets left unused gets filled with -1 for widths and 0 for slope.

There is also one exception: since the leftmost whitespace trapeze always has its left slope vertical (i.e. 0), I've reused that value to put the row height into it.
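
Putting it all together, the encoding of one row comes out to something like this; a sketch where the struct and function names are mine, not the actual code:

#include <vector>

struct Trapeze {
    double leftSlope;   // average dx/dy of the left side (0 means vertical)
    double topWidth;    // width of the top of the trapeze, in pixels
    double bottomWidth; // width of the bottom of the trapeze, in pixels
};

// Flatten one horizontal band of trapezes into a fixed-size chunk of the input.
std::vector<double> encodeRow(const std::vector<Trapeze> &row,
                              double rowHeight, size_t maxTrapezes) {
    std::vector<double> out;
    for (size_t i = 0; i < row.size() && i < maxTrapezes; i++) {
        // The leftmost whitespace trapeze always has a vertical left side,
        // so its slope slot gets reused to carry the row height.
        out.push_back(i == 0 ? rowHeight : row[i].leftSlope);
        out.push_back(row[i].topWidth);
        out.push_back(row[i].bottomWidth);
    }
    while (out.size() < maxTrapezes * 3) {
        out.push_back(0.);  // unused slope slots get filled with 0
        out.push_back(-1.); // unused widths get filled with -1
        out.push_back(-1.);
    }
    return out;
}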

And the result? By now you've probably guessed it: it was a little worse yet (and a lot slower to compute each pass because of the increased input size) than the run-length version. So this representation has turned out to be not very good for generalization, although not terribly bad either. Maybe it can be twisted in some way to make it better. Maybe put it back into a bitmap by treating each trapeze as one pixel, but by now I'm rather pessimistic about these prospects :-)

While experimenting with it, I've tuned the autoRate2 coefficients to make them a little more aggressive, and they have proved themselves quite well. To check how my auto-scaled momentum measures up against the classic stochastic descent, I've tried that one too, and the stochastic version did noticeably worse. I've also tried going to the Leaky ReLU activation, and that one did a little worse yet. So I think at least the descent part and my Corner activation function are working decently well.

Thursday, January 5, 2023

unmelting a neural network

I've been wondering: could a neural network that experienced a "meltdown" from a too-high training rate be restored? I've got such an example from MNIST and dumped out its contents. What happened inside is that pretty much all the weights got pushed to the values of 1 and -1, the neurons becoming very much the same. So when trying to train it again, this lack of, pardon my French, diversity wouldn't let the optimization progress far: it just optimizes this one neuron state per layer in many copies.

For some time I've had an idea of how this problem could be resolved: by a partial randomization of the gradients. So I've done this with a simple saw-toothed "randomization" where each next weight gets its gradient reduced slightly more, up to a limit, then it drops back down and the next "sawtooth" starts. The starting position of the teeth gets shifted by one on each training pass.
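
In a simplified form the sawtooth looks about like this; it's a sketch, the real thing is behind the tweakRate_ option, and the tooth length of 10 is an arbitrary example:

#include <vector>

void tweakGradient(std::vector<double> &grad, double tweakRate, size_t pass) {
    const size_t toothLen = 10; // arbitrary example length of one sawtooth
    for (size_t i = 0; i < grad.size(); i++) {
        // Position within the current tooth, shifted by one on every pass.
        size_t pos = (i + pass) % toothLen;
        // Each next weight gets its gradient reduced slightly more, up to
        // tweakRate, then the reduction drops back and the next tooth starts.
        double reduction = tweakRate * pos / (toothLen - 1);
        grad[i] *= (1. - reduction);
    }
}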

I've started by "tweaking" in this way 1% of the gradient size (I've named this option of FloatNeuralNet tweakRate_), and combined it with the options momentum_ and autoRate2_. It did work some but barely made a scratch. OK, so I've bumped it up to 30%. It did work much better but still not quite great. Over about 300K training passes it got to the mean-square error of about 0.15 (by comparison, the normal training from a random point gets to about 0.05 in 10K steps) and would not budge much any more. The verification on the test cases was much closer: a mean-square error of about 0.24 instead of 0.19, and an error rate of about 7% instead of 5.5%. So it might be not that bad after all. The combination of the new auto-rate and momentum descent worked great at preventing another meltdown. Interestingly, at the start all the gradient was concentrated in the last of the 3 layers, and then it gradually shifted towards the first layer.

Then I've tried the same tweak rate of 0.3 for the training from a random initial state, and it didn't have any detrimental effect at all, it even did slightly better than without it. So it should be safe to use in the general case as a cheap preventative measure.

This also gave me an opportunity to look more into the tuning of the auto-rate algorithm, which I've made a bit better, and also to look into what gradients are where. As it turns out, the highest gradient dimensions by far are at the weights that have reached the [-1, +1] boundary, and they skew the norm2 and mean-square of the gradient a lot. When I've changed the code to mark such gradients post-factum as 0, that gave me an opportunity to count them separately and to exclude them from the means. Their number grows early in the training and then gradually reduces (but not to 0).
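
The marking and counting is straightforward, roughly like this; a sketch with my own names, where bound would be 1 in the original setup:

#include <cstddef>
#include <vector>

struct GradStats {
    size_t saturated;  // dimensions whose weight sits at the boundary
    double meanSquare; // mean-square of the rest of the gradient
};

GradStats gradStatsExcludingSaturated(std::vector<double> &grad,
                                      const std::vector<double> &weights,
                                      double bound) {
    GradStats st = {0, 0.};
    size_t counted = 0;
    for (size_t i = 0; i < grad.size(); i++) {
        if (weights[i] >= bound || weights[i] <= -bound) {
            grad[i] = 0.; // the weight is pinned at the boundary anyway
            st.saturated++;
        } else {
            st.meanSquare += grad[i] * grad[i];
            counted++;
        }
    }
    if (counted > 0)
        st.meanSquare /= counted;
    return st;
}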

How about if we raise the boundaries? That should reduce the number of the dimensions hitting them and make the training faster, right? And it would also be a good test of the new auto-rate algorithm, since as I've shown before, the weights over 1 are much more susceptible to meltdowns. I've tried the boundaries of 10 and 100. The auto-rate worked great but the training got slower. As far as I can tell, the higher weights more often trigger the situations where the auto-rate algorithm drops the training rate down, and the rate tends to be 10 to 100 times lower.

But the bad news for the auto-rate logic is that manually picking a just-high-enough training rate still ultimately produces a slightly better result. The auto-rate algorithm starts with a similar rate but then gradually drops it by about 3 orders of magnitude. And as I've been watching the mean-square errors pass by pass, I could see that they behaved differently: the fixed-rate algorithm would periodically have the error grow and maybe even stay up for a while but then drop lower than where it was before, while the auto-rate algorithm tends to just chisel away at the error little by little; it still has the error grow a little periodically but squashes it very fast. So perhaps the conservatively low rate gets the function trapped in some local minimum, while the fixed rate breaks out of it (when it doesn't lead to a meltdown). If I let the auto-rate algorithm grow the rate more, and then drop it when it gets out of control, it actually does worse. But maybe some better adaptivity could be devised.

And/or maybe bring the stochastic descent back into the mix. I've been computing the full gradient because this way any kind of postprocessing represents a relatively low overhead, done once per training pass; doing the same after each training case would be very slow. But the stochastic descent is much more resistant to a too-high descent rate, and should be able to shake out of the local minimums better. So maybe they can be combined: do a few stochastic passes, then a few deterministic passes, and so on, with the rate computed at the deterministic passes fed to control the stochastic passes.

Monday, January 2, 2023

optimization 13 - using the gradient sign changes

 When I've previously experimented with FISTA momentum descent, one thing that worked quite well was detecting when some dimension of the gradient changes sign, and then resetting the momentum in this dimension to 0.

One of the typical problems of the momentum methods is that they tend to "circle the drain" around the minimum. Think of one of those coin collection bins for charity where you get a coin rolling down the trough, and then it circles the "gravity well" of the bin for quite a while before losing momentum and descending into the center hole. This happens because the speed (momentum) of the coin is initially directed at an angle from the minimum (the center hole). And the same happens with the momentum descent in optimization: the momentum usually develops at an angle, and just as the current point gets close to the minimum, the momentum carries it by and away to the other side of the "well", where it will eventually reverse direction and come back, hopefully this time closer to the minimum. But there is a clear indication that we're getting carried past the minimum: the sign of the gradient in the direction where we're getting carried past changes. So if we kill the momentum in this direction at this point, we won't get carried past. It helps a good deal with quicker convergence.

An overshoot is not the only reason why a gradient's dimension might be changing sign. Our "virtual coin" might be rolling down a multi-dimensional trough, oscillating a little left and right in this trough. But there killing the momentum wouldn't hurt either, it would just dampen these oscillations, which is also a good thing.

Which gave me the idea that once we have this, there is no point in the gradual reduction of momentum that is embedded into FISTA through the parameter t. If the momentum reduction on overshoot gets taken care of as described above, there is no point in shrinking the momentum otherwise.
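
In a simplified momentum descent (not the actual FISTA-based code in FloatNeuralNet), the reset looks about like this, with friction standing in for the usual momentum decay factor:

#include <vector>

void momentumStep(std::vector<double> &weights, std::vector<double> &velocity,
                  std::vector<double> &prevGrad, const std::vector<double> &grad,
                  double rate, double friction) {
    for (size_t i = 0; i < weights.size(); i++) {
        // A sign change in this dimension means we've been carried past the
        // minimum (or are oscillating across a trough): kill the momentum.
        if (grad[i] * prevGrad[i] < 0.)
            velocity[i] = 0.;
        velocity[i] = velocity[i] * friction - rate * grad[i];
        weights[i] += velocity[i];
        prevGrad[i] = grad[i];
    }
}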

And then I've thought of applying the same logic to estimating the training rate. I've been thinking about the Adam method that I've linked to in a recent post, and it doesn't really solve the issue with the training rate. It adjusts the rate between the different dimensions, but it still has in it a constant that essentially represents the global training rate. In their example they had just come up with this constant empirically, but it would be different for different problems, and how do we find it? It's the same problem as finding the simple training rate. The sign change detection to the rescue.

After all, what happens when the too-high rate starts tearing the model apart? The rate that is too high causes an algorithm step to overshoot the minimum. And not just overshoot but overshoot it so much that the gradient grows. So on the next step it overshoots back even more, and the gradient grows again, and so on, and so on, getting farther from the minimum on each step. And the momentum methods tend to exacerbate this problem.

So if we detect the dimensions that change sign, and see if the gradient in them grows, by how much, and how it compares with the gradient in the dimensions that didn't change sign, we'd be able to detect the starting resonance and dampen it by reducing the training rate. I've tried it, and it works very well; check out https://sourceforge.net/p/triceps/code/1792/tree/, it's the FloatNeuralNet option autoRate2_. So far the tunables for it are hardcoded, and I think that I've set them a bit too conservatively, but altogether it works very well, producing a little faster training than I've seen before, and without the meltdowns.
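
The gist of the detection can be sketched like this; the real logic in the autoRate2_ option has different tunables, and the threshold here is just an illustration:

#include <cmath>
#include <vector>

// Compare the dimensions that flipped sign and grew against the rest.
bool resonanceStarting(const std::vector<double> &grad,
                       const std::vector<double> &prevGrad) {
    double flippedAndGrew = 0.; // norm2 of the overshooting dimensions
    double theRest = 0.;        // norm2 of everything else
    for (size_t i = 0; i < grad.size(); i++) {
        if (grad[i] * prevGrad[i] < 0. && std::fabs(grad[i]) > std::fabs(prevGrad[i]))
            flippedAndGrew += grad[i] * grad[i];
        else
            theRest += grad[i] * grad[i];
    }
    // When the overshooting dimensions start to dominate, the destructive
    // resonance is brewing and the training rate needs to be dropped.
    return flippedAndGrew > theRest; // the threshold factor of 1 is arbitrary
}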

Another thing I've changed in the current version is the logic that pushes the rarely-seen unusual cases to be boosted for a better recognition. It previously didn't work well with the momentum methods, because it was changing the direction of the gradient drastically between the passes. I've changed it to make the boosting more persistent between the passes, and instead of shrinking the gradients of the correct cases, to gradually grow the representation of the incorrect cases, expanding their gradients. It's still a work in progress but looks promising.

Oh, and BTW, one thing that didn't work out was the attempt to boost similarly the cases that give the correct answer by having the highest output point to the right digit but do this at a very low confidence, so that even the best output is below 0 (and sometimes substantially below 0, something like -0.95). All I could do was shrink the percentage of such cases slightly, from about 17.5% to about 16.5%. And I'm not sure what can be done about it. I guess it's just another manifestation of the great variability of handwritten characters. Maybe it could be solved by vastly growing the model size and the training set, but even if it could, it would be nice to find some smarter way. Perhaps a better topological representation of the digits instead of a plain bitmap would do the trick, but I don't know how to do it. One of the theories I've had was that it's caused by a natural trend towards negative numbers, because in each training case we have one output with 1 and nine outputs with -1. So if we changed the negatives to, say, -0.1, that would pull the numbers higher. But that's not a solution either, it just moves the average up, diluting the strength of the negatives.

a quick test of a theory about MNIST

I've had this theory that the test set in MNIST just contains the digit images that are substantially different from the training set, and this is the reason why they're not recognized well. I've come up with a quick way to test this theory: just merge the test set into the training set, and see if it makes any difference.

It does. When the test set is included into training, it gets recognized very well. There still usually are a couple of images that have issues but that's down from about a hundred. I think it tells us in two ways that the test set contains different images:

(1) If they start getting recognized when trained on them, this means that the training set just doesn't train for the right thing.

(2) When the NN has a hard time even training on some images, this means that the abilities of the NN are getting stretched, that it doesn't have enough brain power to cope with all the different possibilities.

Then I've tried adding some brain-power to the NN. First I went back to the original 16x16 images instead of the shrunk 8x8 but kept the width of the NN layers the same. This grew the cost of the first layer 4-fold but the others stayed the same, so the code didn't slow down so much. This did help some, both with the original and with the mixed training set. But not spectacularly. Then I've also expanded the first layer of neurons 4-fold. This made things slower yet, and provided another improvement, but again nothing spectacular.

I think there are just too many ways to draw the digits - slant one or another stroke a little more or a little less, or use a thicker pen, and the digit suddenly has a stroke where it used to have a hole and the other way around, and the NN gets confused. Also, some digits turn out to be unexpectedly similar, such as 6 and 2. When we draw them by hand, they grow loops around the corners, and more loops where the pen doesn't quite lift between the digits. So both 6 and 2 end up looking like a vertical loop with an opening that's connected to a horizontal semi-loop. The only difference is in the direction: 6 has the vertical loop opening on the right and the horizontal loop opening on the left, while 2 has the vertical loop opening on the left and the horizontal loop opening on the right. And the way the strokes shift around, it's easy for the NN to get confused.

I don't know how they do the handwriting recognition in reality. My guess is that some preprocessing of the images that extracts the topology of the strokes into a more explicit representation should help a lot. I guess it's also possible to generate more sample images for the training set by stretching the existing ones in different ways, but that sounds like a dead end. A variation of it might be to use the batching again, but this time include only the images of the same digit into a batch.