Sunday, January 15, 2023

trapeze to bitmap for MNIST

Continuing with the trapeze-based representation of the handwritten images that I've described in the previous post, next I've tried to do the reverse transformation: from the set of trapezes back to a bitmap that tries to preserve the significant features detected by the trapezes. And if you've read the previous post, you can probably guess the result: it became a little worse yet!
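To give an idea of what this reverse transformation looks like, here's a little Python sketch of the general approach (the trapeze representation and all the names here are made up for illustration, the real code is different): take each trapeze as two horizontal edges connected by straight sides, and fill it into the bitmap scanline by scanline.

    import numpy as np

    def trapezes_to_bitmap(trapezes, size=28):
        # Rasterize a list of trapezes back into a size x size bitmap.
        # Each trapeze is assumed to be a tuple
        # (y_top, y_bot, x_left_top, x_right_top, x_left_bot, x_right_bot).
        img = np.zeros((size, size), dtype=np.float32)
        for (y0, y1, xl0, xr0, xl1, xr1) in trapezes:
            height = max(y1 - y0, 1e-6)
            for y in range(int(round(y0)), int(round(y1)) + 1):
                if not 0 <= y < size:
                    continue
                # Interpolate the left and right borders along the height.
                t = (y - y0) / height
                xl = xl0 + t * (xl1 - xl0)
                xr = xr0 + t * (xr1 - xr0)
                for x in range(int(round(xl)), int(round(xr)) + 1):
                    if 0 <= x < size:
                        img[y, x] = 1.0
        return img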

Looking at the images that got misrecognized, I think I understand why. Think for example of a "4" and a "9". A "4" is often drawn with an open top. But when we draw a "9", there is also sometimes a small opening in the top right corner. When I do the detection of trapezes, this little opening becomes a significant feature, and when I convert these trapezes back to a bitmap, this little opening becomes a largish opening, so now the "9" looks more like a "4"! I guess, with a large and representative enough training set, they could be differentiated well. But not if all the "9"s with an opening are in the test set (and yes, if I fold the test set into the training set, they get recognized quite well).

While looking at it all, I've also noticed a few more interesting things. I've turned the classifier mode on again (and improved it a bit to make the gradients more stable), and noticed that it drives the error on the training set down a lot. The classifier mode tries to make sure that the training cases that produce the wrong result get more attention, by multiplying their gradients, which is essentially equivalent to adding 99 more copies of the same training case. Turn the classifier mode on, and the training error goes down from 0.06 to 0.025 in almost no time. Which means that a lot of this error is caused by a few outliers. And maybe another way to fix it would be to use a larger power in the loss function, say the 4th power instead of the square. But the error on the test set doesn't budge much, so maybe this is a misleading measure that doesn't matter much and can cause overfitting.
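In a rough Python sketch (the function names are made up for illustration, only the factor of 100 and the 4th-power idea come from the text above), the weighting and the higher-power loss look like this:

    import numpy as np

    def classifier_mode_weight(outputs, label, boost=100.0):
        # Boost the gradient of a misrecognized case; multiplying the
        # gradient by 100 is equivalent to adding 99 more copies of it.
        return boost if int(np.argmax(outputs)) != label else 1.0

    def loss_pow4(outputs, targets):
        # The alternative idea: a 4th-power loss instead of the square.
        # Its gradient grows as the cube of the error, so the outliers
        # with large errors dominate the updates.
        diff = outputs - targets
        return np.sum(diff ** 4), 4.0 * diff ** 3  # loss and its gradient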

I've found a bug in my code that computed how many cases got recognized with a very low confidence: where even with the right outcome, the value of the highest output was still below 0. Due to this bug, all the cases that were labeled with "0" were counted as low-confidence. So this measure wasn't 17.5% on the training set, it was more like 1%, and in the classifier mode it goes to near 0. But on the MNIST test set, the total of misrecognized and low-confidence cases is still near 20%.
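The fixed check amounts to something like this Python sketch (the names are made up, and I'm assuming here that the outputs can go negative, so 0 serves as the confidence threshold); the buggy version effectively looked at the label instead of the output value, which is why everything labeled "0" got counted:

    import numpy as np

    def is_low_confidence(outputs, label):
        # A case counts as low-confidence when the highest output picks
        # the right digit but its value is still below 0.
        best = int(np.argmax(outputs))
        return best == label and outputs[best] < 0.0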

The auto-adjustment of the descent rate still works well, and I've bumped it up to be more aggressive: changed the step up from 1.1 to 1.2, and the step down from 0.1 to 0.2. Now I think it exhibits the behavior that I've seen with a higher manually-set rate, where once in a while the error bumps up a little, explores the surroundings, and then quickly drops down below the previous low. And the attempts to start a destructive resonance still get arrested well.
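The shape of this auto-adjustment, in a simplified Python sketch (the names and the exact condition are my simplification, the real logic has more to it), is roughly:

    def adjust_rate(rate, last_error, new_error, step_up=1.2, step_down=0.2):
        # Grow the rate a little while the error keeps dropping, and cut it
        # sharply when the error goes up (the old factors were 1.1 and 0.1).
        return rate * step_up if new_error <= last_error else rate * step_down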

I've added a printout of gradients by layer, and they do tend to vary in an interesting way, kind of "sloshing about". At the start of the training the high layers usually show the high gradients, and the low layers show the low gradients. But this gradually reverses, and after a few thousand training passes you get the high gradients in the low layers and the low gradients in the high layers. Unless you then go and change some training criteria, in which case the high layers get the higher gradients again until things settle. Which probably means that selecting separate training rates by layer might be worth a try. Or maybe even selecting them separately for each weight, like the Adam algorithm that I've mentioned before does (I haven't tried this specific algorithm yet).
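The printout itself is nothing fancy, something to the effect of this Python sketch (the names are for illustration):

    import numpy as np

    def print_gradients_by_layer(gradients):
        # gradients: a list of per-layer gradient arrays, index 0 being
        # the lowest layer. Print a simple magnitude measure for each
        # layer to see how the "sloshing" between layers goes.
        for i, g in enumerate(gradients):
            g = np.asarray(g)
            print("layer %d: mean |grad| = %g, max |grad| = %g"
                  % (i, np.mean(np.abs(g)), np.max(np.abs(g))))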

