Tuesday, June 28, 2016

AdaBoost 5: the square root and the confidence value

The question of what should be used as the confidence value for Bayesian computations is a thorny one. In my previous take on it I've used the value without a square root, while the classic AdaBoost formula uses the one with the square root. Which one is more right?

Both of them produce the same decision for the classification of the cases. The difference is in the absolute value of the chances, or in the AdaBoost terms, the margin (with the classic AdaBoost also applying the logarithm on top).

I still think that using the value without the square root has the more correct "physical meaning". But either can be used with the same result.

Following the classic AdaBoost, the version of the confidence with the square root follows from the expression

Z = sum for all good cases i (Wi / S) + sum for all bad cases j (Wj * S)
Z = sum for all good cases i (Wi * (1-C) / C) + sum for all bad cases j (Wj * C / (1-C))

As in the last post, W is the weight of a particular training case, the same as D(i) in the classic AdaBoost terms; it's just that since I've been using weights rather than a distribution, the letter W comes easier to my mind.

The "physical meaning" here is that the weights of the training cases for the next round of boosting are adjusted opposite to how the chances of these training cases get affected by this round during the final computation. If a training case gets predicted correctly by a partial hypothesis, its weight will get multiplied by C and the weight of the other cases will be multiplied by (1-C), so the changes will get multiplied by C/(1-C), and for the next round of boosting the good cases get divided by that to compensate. For the cases that get predicted incorrectly, the opposite happens.

This translates to

C / (1-C) = S = sqrt((1-e) / e) = sqrt(1-e) / sqrt(e)

The last expression does NOT mean

C = sqrt(1-e)

Instead, it means

C = sqrt(1-e) / (sqrt(1-e) + sqrt(e))
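
To make this concrete, here is a quick numeric check in Python (my own throwaway snippet, with an arbitrary example value of e) confirming that C = sqrt(1-e) / (sqrt(1-e) + sqrt(e)) satisfies C / (1-C) = sqrt((1-e) / e), while the naive reading C = sqrt(1-e) does not:

from math import sqrt, isclose

e = 0.2                                    # an arbitrary training error
S = sqrt((1 - e) / e)                      # S = 2.0 for this e

# the correct solution of C / (1-C) = S
C = sqrt(1 - e) / (sqrt(1 - e) + sqrt(e))
assert isclose(C / (1 - C), S)             # holds, C is about 0.667

# the naive reading C = sqrt(1-e) does not satisfy the equation
C_naive = sqrt(1 - e)
print(C_naive / (1 - C_naive), S)          # about 8.47 versus 2.0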

Alternatively, the approach without the square root starts with the premise

Z = sum for all good cases i (Wi / C) + sum for all bad cases j (Wj / (1-C))

The "physical meaning" is as described before, the weights of the training cases for the next round of boosting are adjusted opposite to how the weights of these training cases get affected by this round during the final computation. It seems to me that compensating the weights for the changes in weights is the more correct "physical meaning" than compensating the weights for the changes in chances.

This translates to

C / (1-C) = S^2 = (sqrt( (1-e) / e ))^2 = (1-e) / e

and

C = (1-e)
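
To see the difference in the margins concretely, here is a small Python loop of my own (purely illustrative) printing the per-round chance multiplier C/(1-C) of both versions for a few values of the training error. The multiplier of the no-square-root version is simply the square of the classic one, so the product over the rounds lands on the same side of 1 in both versions and the classification decision comes out the same; only the margin differs:

from math import sqrt

for e in (0.1, 0.3, 0.45):
    S = sqrt((1 - e) / e)
    mult_sqrt = S       # classic: C = sqrt(1-e)/(sqrt(1-e)+sqrt(e)), so C/(1-C) = S
    mult_plain = S * S  # no square root: C = 1-e, so C/(1-C) = (1-e)/e = S^2
    print(e, mult_sqrt, mult_plain)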

By the way, chasing this version through the derivatives as shown in the previous post was interesting. I've come to appreciate why the authors of AdaBoost brought the exponents into the formulas: doing the derivatives and integrals with the exponents is much easier than without them. Then I realized that with the exponents, where S = exp(Alpha), the derivative being computed is dZ/dAlpha, not dZ/dC, and that is the correct approach. So without the exponents I should be computing dZ/dS, and that becomes easy again.

So, the real version of AdaBoost described in the third part is this:



Given: (x1, y1), ..., (xm, ym) where xi belongs to X, yi belongs to {-1, +1}.
Initialize: D1(i) = 1 for i = 1, ..., m.
For t = 1, ..., T:
  • Train the basic algorithm using the weights Dt.
  • Get weak hypothesis ht: X -> {-1, +1}.
  • Aim: select ht to minimize the bound on the training error Zt, where:
    Wgoodt = 0; Wbadt = 0;
    for i = 1, ..., m {
         if ht(xi) = yi {
              Wgoodt += Dt(i)
         } else {
              Wbadt += Dt(i)
         }
    }
    Ct = Wgoodt / (Wgoodt + Wbadt)
    Zt = Wgoodt/Ct + Wbadt/(1-Ct)
    which can also be represented symmetrically through St:
    Zt = Wgoodt/St + Wbadt*St
    St = sqrt(Ct / (1-Ct)) = sqrt(Wgoodt / Wbadt)
    and substituting St:
    Zt = Wgoodt / sqrt(Wgoodt / Wbadt) + Wbadt * sqrt(Wgoodt / Wbadt)
    = 2 * sqrt(Wgoodt * Wbadt)
    Which gets minimized when either of Wgoodt or Wbadt gets minimized, but to be definite we prefer to minimize Wbadt.
  • Update,
    for i = 1, ..., m {
          if ht(xi) != yi {
              Dt+1(i) = Dt(i) / (1-Ct)
         } else {
              Dt+1(i) = Dt(i) / Ct
         }
    }
Produce the function for computing the value of the final hypothesis:
H(x) {
     chance = 1;
     for t=1,...,T {
          if (ht(x) > 0) {
               chance *= Ct/(1-Ct);
          } else {
               chance *= (1-Ct)/Ct;
          }
     }
     return sign(chance - 1)
}
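
To show the listing above in action, here is a minimal runnable sketch of it in Python (my own illustration, not anybody's official code). It assumes the simplest possible weak learner, a threshold stump on a single numeric feature picked to minimize the weighted Wbad, and it doesn't guard against the degenerate rounds where Wbad or Wgood comes out as 0 (which would make Ct hit 1 or 0 and break the chance computation):

def train(xs, ys, rounds):
    m = len(xs)
    d = [1.0] * m                       # D1(i) = 1
    model = []                          # list of (ht, Ct) pairs
    for _ in range(rounds):
        # pick the stump (threshold, direction) with the smallest weighted Wbad
        best = None
        for thr in xs:
            for sign in (+1, -1):
                h = lambda x, thr=thr, sign=sign: sign if x > thr else -sign
                wbad = sum(d[i] for i in range(m) if h(xs[i]) != ys[i])
                if best is None or wbad < best[1]:
                    best = (h, wbad)
        h, wbad = best
        wgood = sum(d) - wbad
        c = wgood / (wgood + wbad)      # Ct
        model.append((h, c))
        # update the weights: divide by Ct if predicted correctly, by (1-Ct) if not
        for i in range(m):
            d[i] /= c if h(xs[i]) == ys[i] else (1.0 - c)
    return model

def classify(model, x):
    chance = 1.0
    for h, c in model:
        chance *= c / (1.0 - c) if h(x) > 0 else (1.0 - c) / c
    return 1 if chance > 1.0 else -1

# a tiny usage example: no single stump classifies the point at x=4.0 right,
# but three rounds of boosting get the whole set correct
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [-1, -1, +1, -1, +1, +1]
model = train(xs, ys, rounds=3)
print([classify(model, x) for x in xs])   # prints [-1, -1, 1, -1, 1, 1]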



Having this sorted out, I can move on to more creative versions of the algorithm where on a step of boosting the different training cases may get stamped with the different confidence values C.

Friday, June 24, 2016

AdaBoost 4 or the square root returns

I've had to return the library book on boosting. I've ordered my own copy, but on the last day I paged through the more interesting chapters. This gave me some ideas on how the Bayesian model can be fit to those too, and I started writing it up, but I got stuck on the question: what should get minimized? So I went back to the book (my own copy now) and I think that I finally understand it. Here is how it works:

The real measure that gets minimized on each round of boosting is Z. In the book the authors prove that it's not the training error as such but an upper bound on the training error. They write it (with the subscript t of the boosting round dropped) as

Z = sum for all good cases i (Wi * exp(-Alpha)) + sum for all bad cases j (Wj * exp(Alpha))

Where W is the weight of a particular training case (the same as D(i) in the classic AdaBoost terms; since I've been using weights rather than a distribution, the letter W comes easier to my mind). They have a use for this exponent in the proofs but here it doesn't matter. Let's create a notation with a new variable S (the name doesn't mean anything, it's just a letter that hasn't been used yet) that will sweep the exponent under the carpet:

S = exp(Alpha)
Z = sum for all good cases i (Wi / S) + sum for all bad cases j (Wj * S)

S has a "physical meaning": it represents how the chances of a particular case change. The weights of the "good" cases (i.e. those that got predicted correctly on a particular round) get divided by S because we're trying to undo the effects of the changes from this round of boosting for the next round. Getting back to the Bayesian computation,

S = C / (1-C)

Where C is the confidence of the round's prediction acting on this case. When the produced boosting algorithm runs, if the round's prediction votes for this case (which we know got predicted by it correctly in training), its weight will be multiplied by C. If the prediction votes against this case, its weight will be multiplied by (1-C). Thus S measures how strongly this prediction can differentiate this case. For the cases that get mispredicted by this round, the relation is the opposite, so their weights get multiplied by S instead of divided by it.
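
As a small illustration of how Alpha, S and C tie together (my own snippet, assuming the textbook choice of Alpha = (1/2)*ln((1-e)/e) in the classic algorithm):

from math import exp, log, sqrt, isclose

e = 0.2                                # the training error of one round
alpha = 0.5 * log((1 - e) / e)         # the classic AdaBoost Alpha
S = exp(alpha)                         # the substitution S = exp(Alpha)
assert isclose(S, sqrt((1 - e) / e))   # matches the S derived just below
C = S / (1 + S)                        # solving S = C/(1-C) for the confidence
print(alpha, S, C)                     # about 0.693, 2.0, 0.667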

The mathematical way to minimize Z is by finding the point(s) where its derivative is 0.

dZ/dS = sum for all good cases i (-Wi / S^2) + sum for all bad cases j (Wj) = 0
sum for all good cases i (Wi / S^2) = sum for all bad cases j (Wj)

And since for now we assume that S is the same for all the cases,

sum for all good cases i (Wi) = Wgood
sum for all bad cases j (Wj) = Wbad

then

Wgood / S^2 = Wbad
Wgood / Wbad = S^2
S = sqrt(Wgood / Wbad)
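
A quick numeric check (a sketch of my own with arbitrary weight totals) confirms that this is indeed where the minimum of Z sits:

from math import sqrt

wgood, wbad = 8.0, 2.0   # arbitrary example totals

def z(s):
    return wgood / s + wbad * s

# scan S over a grid and find where Z is the smallest
s_best = min((i / 100.0 for i in range(1, 1000)), key=z)
print(s_best, sqrt(wgood / wbad))   # both come out as 2.0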

And since we know that the training error e is proportional to Wbad and (1-e) is proportional to Wgood:

e = Wbad / (Wgood + Wbad)
1-e = Wgood / (Wgood + Wbad)

then we can rewrite S as

S = sqrt( (1-e) / e )

Or substituting the expression of S through C,

C / (1-C) = sqrt( (1-e) / e )
C = sqrt(1-e)

Which means that the Bayesian confidence in AdaBoost really is measured as a square root of the "non-error". This square root can be put away in the basic case but it becomes important for the more complex variations.
[I think now that this part is not right, will update later after more thinking]

Returning to the minimization, this found value of S gets substituted into the original formula of Z, and this is what we aim to minimize on each round. For the classic AdaBoost it is:

Z = Wgood / sqrt((1-e) / e) + Wbad * sqrt((1-e) / e)

Since Wgood is proportional to (1-e) and Wbad is proportional to e (and if the sum of all the weights is 1, Wgood = 1-e and Wbad = e), we can substitute them:

Z = (1-e) / sqrt((1-e) / e) + e * sqrt((1-e) / e)
= sqrt(e * (1-e)) + sqrt(e * (1-e))
= 2 * sqrt(e * (1-e))

This value gets minimized when e gets closer to either 0 or 1 (because the formula is symmetric and automatically compensates for the bad predictions by "reverting" them). The classic AdaBoost algorithm then says "we'll rely on the ability of the underlying algorithm to do the same kind of reverting" and just picks the minimization of e towards 0. This makes the computation more efficient for a particular way of computing C, but the real formula is the one above. If we want to handle the more generic ways, we've got to start with this full formula and only then maybe simplify it for a particular case.
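
A few sample values (another throwaway snippet of mine) show this shape of Z: it peaks at e = 0.5 and falls towards 0 at both ends:

from math import sqrt

for e in (0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    print(e, 2 * sqrt(e * (1 - e)))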

This formula can be also written in a more generic way, where each training case may have its own different value of confidence C and thus of S:

Z = sum for all good cases i(Wi / Si) + sum for all bad cases j(Wj * Sj)
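
Spelled out as code (a tiny helper of my own, with made-up names), this generalized bound would look like:

def z_generic(weights, s, correct):
    # parallel lists: the case weight Wi, the per-case Si = Ci/(1-Ci),
    # and whether the round's hypothesis predicted the case correctly
    return sum(w / si if ok else w * si
               for w, si, ok in zip(weights, s, correct))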

And now I'll be able to talk about these more generic cases.

By the way, the difference between how the variation of AdaBoost for logistic regression works and how the classic AdaBoost works comes down to a different choice of the measure that gets minimized, that is, a different formula for Z.