Saturday, August 6, 2016

AdaBoost 7: multi-class & unseen combinations

The basic idea behind the multi-class (and also multi-label, i.e. where each case may have more than one outcome) AdaBoost can be described as boosting the recognition of all the outcomes in parallel. It takes the "yes or no" dichotomies for all the outcomes, and on each round it tries to find a partial hypothesis that minimizes the sum of Z over all of them. This is very similar to what was described in part 6, where multiple ranges were added, each with its own confidence. The difference is in the final computation: in the multi-class version there is a separate formula for each class, using only the confidence values that the training rounds computed for that particular class.
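
To make this concrete, below is a minimal sketch of the parallel boosting, assuming the confidence-rated ("real") flavor of AdaBoost and decision stumps over 0/1 features as the underlying algorithm; all the names in it are hypothetical, not from any library:

```python
import numpy as np

def train_multilabel_adaboost(X, Y, n_rounds):
    """X: (n_cases, n_features) array of 0/1 features; Y: (n_cases, n_labels)
    array of +1/-1 per label (a case may carry several +1 labels).
    The weak hypotheses are decision stumps on a single feature."""
    n, n_feat = X.shape
    n_labels = Y.shape[1]
    # One weight distribution per dichotomy: all of them boosted in parallel.
    D = np.full((n_labels, n), 1.0 / n)
    model = []  # list of (feature, per-label confidences)
    eps = 1e-10
    for _ in range(n_rounds):
        best = None
        for f in range(n_feat):
            h = np.where(X[:, f] == 1, 1.0, -1.0)  # the stump's vote, +1/-1
            z_sum, alphas = 0.0, []
            for l in range(n_labels):
                w_pos = D[l][h == Y[:, l]].sum()  # weight of correct cases
                w_neg = D[l][h != Y[:, l]].sum()  # weight of wrong cases
                # Standard confidence-rated formulas: the optimal alpha and
                # the resulting Z = 2*sqrt(W+ * W-) for this dichotomy.
                alphas.append(0.5 * np.log((w_pos + eps) / (w_neg + eps)))
                z_sum += 2.0 * np.sqrt(w_pos * w_neg)
            # The round's winner minimizes the sum of Z over all dichotomies.
            if best is None or z_sum < best[0]:
                best = (z_sum, f, np.array(alphas))
        _, f, alphas = best
        h = np.where(X[:, f] == 1, 1.0, -1.0)
        for l in range(n_labels):  # re-weight each dichotomy separately
            D[l] *= np.exp(-alphas[l] * h * Y[:, l])
            D[l] /= D[l].sum()
        model.append((f, alphas))
    return model

def score(model, x):
    """The final computation: a separate formula per class, summing only
    the confidences that the rounds computed for that particular class."""
    s = np.zeros(len(model[0][1]))
    for f, alphas in model:
        s += alphas if x[f] == 1 else -alphas
    return s  # the sign per label gives the multi-label answer
```

Note that one weight distribution is kept per dichotomy: the round's winner is chosen by the total Z, but the scoring keeps the per-class confidences strictly separate.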

There is also a possible optimization for the situation where there may be only one outcome per case (i.e. single-label): the mapping between the dichotomies used in the logic above and the outcomes doesn't have to be one-to-one. Instead each outcome can be mapped to a unique combination (or multiple combinations) of the dichotomies. The dichotomies can be selected in some smart way: say, if we're trying to recognize the handwritten digits, they can be "pointy vs roundy", "a loop at the bottom vs no loop at the bottom", etc. Or just by dividing the outcomes in half in some blind way, like "0,1,2,3,4 vs 5,6,7,8,9", "0,2,4,6,8 vs 1,3,5,7,9", etc.
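
For illustration, here is a tiny sketch of the blind encoding above, assuming four binary-code dichotomies for ten digits (this is essentially what the literature calls error-correcting output codes); the names are mine:

```python
# A hypothetical blind encoding: 10 digits in 4 dichotomies (their binary code).
codes = {d: [(d >> b) & 1 for b in range(4)] for d in range(10)}
# E.g. codes[5] == [1, 0, 1, 0]; dichotomy 0 is "1,3,5,7,9 vs 0,2,4,6,8", etc.

def classify(dichotomy_scores):
    """One boosted score per dichotomy, its sign being the vote; pick the
    digit whose combination disagrees with the fewest votes."""
    votes = [1 if s > 0 else 0 for s in dichotomy_scores]
    return min(codes, key=lambda d: sum(v != c for v, c in zip(votes, codes[d])))
```

With only four dichotomies for ten digits the code has no slack; adding a few redundant dichotomies gives the decoding some distance to absorb an occasional wrong vote, which is where the "error-correcting" part comes from.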

Returning to the multi-label situation, one of the problems I've experienced with it is the inability to recognize the combinations of outcomes that weren't present in the training data. That is, each of the outcomes was present, but no training case had exactly this combination of them. For the basic diagnostics this can be discounted by asking "but what's the percentage of such cases?", but when you start pushing the quality of diagnosis towards 95%, it turns out that a great many of the remaining misdiagnosed cases fall into this category.

AdaBoost doesn't have any built-in solution for this problem; the solution it produces is only as good as the underlying algorithm. There is nothing in AdaBoost that puts pressure on the underlying algorithm to recognize the combinations that aren't present in the training data. If the underlying algorithm can do it anyway (and perhaps despite the pressure from AdaBoost), the resulting combined formula will be able to do it too; if it can't, then the combined formula won't either. Simple algorithms like decision stumps can't.

But maybe some multi-pass schemes can be devised. Run the boosting once and get a set of candidate symptoms (i.e. partial hypotheses). Use these symptoms on the training cases to try to differentiate which symptom is related to which outcome. Then run the boosting a second time from scratch, only this time with the relevance knowledge mixed in: whenever a symptom close to an irrelevant one is tested on a training case, make it return "I don't know", i.e. the confidence 0.5. This will shift the choice of symptoms. Obviously, when using the resulting formula, the same pruning of the irrelevant symptoms has to be applied in the same way. The symptoms from the second pass can be re-tested for relevance, and if any change is found, a third pass can be made, and so on.
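
A sketch of how this loop might look, building on the earlier one (the training pass is repeated here so that the fragment stands on its own); the correlation-based relevance test and its threshold are my assumptions, nothing above prescribes a specific test:

```python
import numpy as np

def stump_fit(D_l, h, y_l, relevant):
    """Confidence and Z of a stump for one dichotomy; an irrelevant pair
    answers "I don't know" (confidence 0.5), i.e. alpha = 0, and since
    D_l sums to 1, its Z stays at 1 (no progress credited)."""
    if not relevant:
        return 0.0, 1.0
    w_pos = D_l[h == y_l].sum()
    w_neg = D_l[h != y_l].sum()
    eps = 1e-10
    return 0.5 * np.log((w_pos + eps) / (w_neg + eps)), 2.0 * np.sqrt(w_pos * w_neg)

def boost_with_mask(X, Y, n_rounds, mask):
    """One boosting pass; mask[f, l] == False marks symptom f as
    irrelevant to label l.  The loop repeats the earlier sketch."""
    n, n_feat = X.shape
    n_labels = Y.shape[1]
    D = np.full((n_labels, n), 1.0 / n)
    model = []
    for _ in range(n_rounds):
        best = None
        for f in range(n_feat):
            h = np.where(X[:, f] == 1, 1.0, -1.0)
            fits = [stump_fit(D[l], h, Y[:, l], mask[f, l]) for l in range(n_labels)]
            z_sum = sum(z for _, z in fits)  # masking shifts the choice of symptoms
            if best is None or z_sum < best[0]:
                best = (z_sum, f, np.array([a for a, _ in fits]))
        _, f, alphas = best
        h = np.where(X[:, f] == 1, 1.0, -1.0)
        for l in range(n_labels):
            D[l] *= np.exp(-alphas[l] * h * Y[:, l])
            D[l] /= D[l].sum()
        model.append((f, alphas))
    return model

def measure_relevance(model, X, Y, threshold=0.05):
    """An assumed relevance test: keep a (symptom, label) pair only if the
    symptom's vote visibly correlates with that label on the training data.
    Symptoms that boosting never picked stay marked as relevant."""
    mask = np.ones((X.shape[1], Y.shape[1]), dtype=bool)
    for f, _ in model:
        h = np.where(X[:, f] == 1, 1.0, -1.0)
        mask[f] = np.abs((h[:, None] * Y).mean(axis=0)) > threshold
    return mask

def multi_pass_boost(X, Y, n_rounds, max_passes=3):
    mask = np.ones((X.shape[1], Y.shape[1]), dtype=bool)  # pass 1: no knowledge
    for _ in range(max_passes):
        model = boost_with_mask(X, Y, n_rounds, mask)
        new_mask = measure_relevance(model, X, Y)
        if (new_mask == mask).all():  # relevance stabilized, stop
            break
        mask = new_mask               # re-run from scratch with it mixed in
    return model, mask  # the same mask must be applied when using the model
```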

Or even better, perhaps this logic can be merged directly into each round of the underlying algorithm, in a single pass of AdaBoost: when a candidate partial hypothesis (i.e. symptom) is found, measure its relevance right there and change its Z-value accordingly. Then pick the candidate that has the lowest Z-value after this correction for relevance, and include the relevance information in the partial hypothesis.
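
In code, a round of this merged variant might choose its winner along these lines; again, the correlation test is an assumed stand-in for whatever real relevance measurement would be used:

```python
import numpy as np

def pick_candidate(candidates, D, Y, threshold=0.05):
    """One round's choice in the merged variant: measure each candidate's
    relevance on the spot, correct its Z accordingly, and carry the
    relevance mask inside the partial hypothesis.  `candidates` are
    +1/-1 vote vectors over the training cases; each D[l] is assumed
    normalized to sum to 1."""
    eps = 1e-10
    best = None
    for h in candidates:
        # Relevance measured right here, per label.
        rel = np.abs((h[:, None] * Y).mean(axis=0)) > threshold
        z, alphas = 0.0, []
        for l in range(Y.shape[1]):
            if not rel[l]:
                z += 1.0          # "I don't know": alpha = 0, full weight kept
                alphas.append(0.0)
                continue
            w_pos = D[l][h == Y[:, l]].sum()
            w_neg = D[l][h != Y[:, l]].sum()
            z += 2.0 * np.sqrt(w_pos * w_neg)
            alphas.append(0.5 * np.log((w_pos + eps) / (w_neg + eps)))
        # Compare by the Z corrected for relevance.
        if best is None or z < best[0]:
            best = (z, h, rel, np.array(alphas))
    return best  # (z, votes, relevance mask, per-label confidences)
```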
