Monday, January 2, 2023

a quick test of a theory about MNIST

I've had this theory that the test set in MNIST just contains the digit images that are substantially different from the training set, and that this is the reason why they're not recognized well. I've come up with a quick way to test this theory: just merge the test set into the training set and see if it makes any difference.
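The merging step itself is trivial. Here is a minimal sketch, assuming the data sits in NumPy arrays; load_digits_16x16, train_nn, and evaluate are hypothetical stand-ins for whatever loader and trainer the actual experiment uses, not real code from it:

```python
# Sketch of the experiment: fold the test set into the training set and see
# whether the previously-misrecognized test images become easy to recognize.
import numpy as np

def merge_test_into_train(train_x, train_y, test_x, test_y):
    """Return a combined training set that also includes the test images."""
    merged_x = np.concatenate([train_x, test_x], axis=0)
    merged_y = np.concatenate([train_y, test_y], axis=0)
    return merged_x, merged_y

# Hypothetical usage (loader and trainer names are placeholders):
# train_x, train_y, test_x, test_y = load_digits_16x16()
# nn = train_nn(*merge_test_into_train(train_x, train_y, test_x, test_y))
# evaluate(nn, test_x, test_y)  # the test images have now been seen in training
```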

It does. When the test set is included in training, it gets recognized very well. There are still usually a couple of images with issues, but that's down from about a hundred. I think this tells us, in two ways, that the test set contains different images:

(1) If these images start getting recognized once they're trained on, it means that the training set just doesn't train for the right thing.

(2) If the NN has a hard time even training on some images, it means that the abilities of the NN are getting stretched: it doesn't have enough brain power to cope with all the different possibilities.

Then I tried adding some brain power to the NN. First I went back to the original 16x16 images instead of the shrunk 8x8 ones, but kept the width of the NN layers the same. This grew the cost of the first layer 4-fold while the others stayed the same, so the code didn't slow down too much. It did help some, both with the original and with the mixed training set, but not spectacularly. Then I also expanded the first layer of neurons 4-fold. This made things slower yet and provided another improvement, but again nothing spectacular.
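To make the two changes concrete, here is a rough sketch using PyTorch-style fully-connected layers. The actual code in these experiments is different, and the hidden width of 64 is a made-up placeholder; only the relative 4-fold changes come from the description above.

```python
import torch.nn as nn

def make_nn(input_side=16, first_hidden=64, hidden=64):
    # Going from the shrunk 8x8 input back to 16x16 grows the first layer's
    # weight matrix 4-fold (256 inputs instead of 64) while the later layers
    # keep their size. Passing first_hidden=4*64 additionally widens the
    # first layer of neurons 4-fold, as in the second experiment.
    return nn.Sequential(
        nn.Flatten(),                                      # 16x16 -> 256 inputs
        nn.Linear(input_side * input_side, first_hidden),
        nn.Sigmoid(),
        nn.Linear(first_hidden, hidden),
        nn.Sigmoid(),
        nn.Linear(hidden, 10),                             # one output per digit
    )
```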

I think there are just too many ways to draw the digits: slant one stroke or another a little more or a little less, or use a thicker pen, and the digit suddenly has a stroke where it used to have a hole, or the other way around, and the NN gets confused. Also, some digits turn out to be unexpectedly similar, such as 6 and 2. When we draw them by hand, they grow loops around the corners, and more loops where the pen doesn't quite lift between the digits. So both 6 and 2 end up looking like a vertical loop with an opening that's connected to a horizontal semi-loop. The only difference is in the direction: 6 has the vertical loop opening on the right and the horizontal loop opening on the left, while 2 has the vertical loop opening on the left and the horizontal loop opening on the right. And with the way the strokes shift around, it's easy for the NN to get confused.

I don't know how they do handwriting recognition in reality. My guess is that some preprocessing of the images that extracts the topology of the strokes into a more explicit representation should help a lot. I guess it's also possible to generate more sample images for the training set by stretching the existing ones in different ways, but that sounds like a dead end. A variation of it might be to use the batching again, but this time include only the images of the same digit in a batch.
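For the same-digit batching variation, a minimal sketch of what the batch grouping could look like; the names and the batch size are placeholders, not code from the actual experiments:

```python
import numpy as np

def same_digit_batches(train_x, train_y, batch_size=32, rng=None):
    """Yield training batches where every image in a batch is the same digit."""
    if rng is None:
        rng = np.random.default_rng()
    for digit in np.unique(train_y):
        idx = np.flatnonzero(train_y == digit)  # all samples of this digit
        rng.shuffle(idx)
        for start in range(0, len(idx), batch_size):
            chunk = idx[start:start + batch_size]
            yield train_x[chunk], train_y[chunk]
```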
