Monday, December 26, 2016

On testing

I've started writing another post (hope to complete it sometime soon), and as one of the offshoots it brought my thoughts to automated testing. Automated testing had been a great new thing, then not so new and kind of a humbug thing, then "instead of testing we'll just run a canary in production", then maybe it's making a bit of a comeback. I think the fundamental reason for automated testing dropping from the great new thing (besides the natural cycle of newer greater things) is that there are multiple ways to do it, and a lot of these ways are wrong. Let me elaborate.

I think that the most important and most often missed point about the automated testing is this: Tests are a development tool. The right tests do increase the quality of the product but first and foremost they increase the productivity of the developers while doing so.

A popular analogy for programs is that a program is like a house, and writing a program is like building a house. I'd say no, this analogy is wrong and misleading. A program is not like a house, it's like a drawing or a blueprint of a house. A house is an analogy of what a program produces.

The engineering drawings are not done much with pencil and paper nowadays, they're usually done in a CAD system. A CAD system not only records the drawing but also allows you to model the items on the drawing and test that they will perform adequately in reality before they're brought into reality. The automated tests are the CAD for programs.

The CAD systems for drawings require entering extra information to be able to do their modeling. So do the automated tests. It's an overhead, but if done right it's an acceptable one, bringing more benefit than it costs.

The first obvious benefit of the automated tests is that they make the programs robust. To bring the house analogy again, like a house of bricks, not a house of cards. You might remember the saying about "If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization". Not with the automated tests, not any more. With the automated tests you don't get into the situation "I've just changed one little thing and it all suddenly fell apart" any more.

But that's again just the reliability. Where is the productivity coming from? The major benefit for the productivity is that you don't need to do so much analysis any more. You can always change things quickly and see what happens in a test environment. This is also very useful for tracing the dependencies: if you change something and the product breaks, you see where it breaks. This is a major, major item that makes the programming much more tractable and turns the devilishly complicated modifications into the reasonably straightforward ones.

The programming tasks tend to come in two varieties: The first variety is the straightforward one: you just sit and write what the program needs to be doing, or even better copy a chunk of the existing code and modify it to do the new thing you need. The second variety happens when the thing you need to do hits the limitations of the infrastructure you have. It usually has some kind of a paradox in it, when you need to do this and that but if you do this then that falls apart and vice versa.

When you have this second kind of a task on your hands, you need to stretch your infrastructure to support both this and that at the same time. And this is the situation when things tend to get broken. I really like to solve such problems in two steps:

1. Stretch the infrastructure to support the new functionality but don't add the new functionality.
2. Add the new functionality.

At the end of step 1 the program works differently inside but in exactly the same way on the outside. Having the automated tests, they all still pass in exactly the same way as before. And only after the second step there will be the new functionality that will require the new tests and/or modification of the old tests.

But as I've said before, not all the tests are useful. I have formulated the following principles of the useful tests:

  • Easily discoverable.
  • Fast running. 
  • Fast starting.
  • Easy to run both by the developer and in an automated scheduler.
  • As much end-to-end as possible.
  • Test support built into the product and visible as an external API.

What do these things mean? Let's look in detail.

Easily discoverable means that the tests must live somewhere next to the code. If I change a line of code, I must be able to find the most relevant tests easily and run them. It's an easy mantra: changed something, run the tests on it. And doing so must be easy, even if the code is not familiar to the developer.

Fast running means that the tests should try not to take too much time. This comes down partially to making the tests themselves fast and partially to the ability to run only the relevant subsets of the tests. The way of work here is: change a few lines, run the basic tests, change a few more lines, run the tests again. The basic subset for some lines should be easy to specify and take seconds, or at worst a few minutes, to run. The fast running concept is also useful for debugging the failed tests: how long does it take to re-run the test and reproduce the problem or verify the fix?

Keeping the tests fast is sometimes quite complicated when testing the handling of time-based events. For time-based events it's very good to have a concept of the application time that can be consistently accelerated at will. Another typical pitfall is the situation "I've requested to do this, how do I find out if it's done yet, before running the next step of the test?". This is something that is easy to resolve by adding the proper asynchronous notifications to the code, making everyone's life easier along the way.
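To illustrate the accelerated application time, here is a minimal sketch (the class name and interface are my own invention, not from any particular product): the code under test reads the time only through this clock, so the tests can scale it or jump it forward instantly.

```python
import time

class AppClock:
    """Application time that can be accelerated or jumped at will.

    Production code reads time only through now(); the tests can
    then compress hours of timer-driven behavior into milliseconds.
    """
    def __init__(self, scale=1.0):
        self.scale = scale                  # app-seconds per real second
        self.base_real = time.monotonic()
        self.base_app = 0.0

    def now(self):
        # Application time = offset + scaled elapsed real time.
        return self.base_app + (time.monotonic() - self.base_real) * self.scale

    def advance(self, seconds):
        # Jump the application time forward instantly (test-only).
        self.base_app += seconds

clock = AppClock()
clock.advance(3600)          # "an hour passes" with no real waiting
assert clock.now() >= 3600.0
```

With this in place, a test of an hourly cleanup job becomes "advance the clock by an hour, check that the cleanup ran", instead of sleeping for an hour.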

Fast starting is related to the fast running but is a bit different. It has to do with the initialization. The initialization should be fast. If you're running a large test suite of ten thousand tests, you can afford to have a lengthy initialization up front. If you run one small test, you can't have this lengthy initialization every time. If you really need this initialization, there must be a way to do it once and then repeatedly re-run the individual test.
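A minimal sketch of the "initialize once, then re-run the individual tests" idea (in pytest this is usually a session-scoped fixture; here a process-wide cache stands in for it, and the function name is made up):

```python
import functools

@functools.lru_cache(maxsize=None)
def test_environment():
    """Stand-in for a lengthy initialization: loading a big config,
    schema, or dataset. The cache makes it run once per process,
    so re-running an individual test doesn't pay the cost again."""
    return {"initialized": True}

env1 = test_environment()
env2 = test_environment()
assert env1 is env2      # the second call reuses the first initialization
```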

The other aspect of the fast starting is that the programmer must be able to run the tests directly from the development environment. The repeating cycle is write-compile-test. The tests must directly and automatically consume the newly compiled code. There must not be any need to commit the code nor to run it through some kind of an official build.

This partially overlaps with the next requirement: the tests being easy to run both directly by the programmer and in an automated system that goes together with the automated official build. The need to run the tests directly comes from the need for the efficiency of development and from the use of the tests as a CAD. There shouldn't be any special hoops to jump through for including the tests into the official builds either, it should just work once checked in. This is another place where keeping the tests next to the code comes in: both the tests and the code must be a part of the same versioning system and must come in as a single commit. Well, not necessarily literally as a single commit; personally I like to do a lot of small partial commits on my personal branch as I write the code and tests, but when it comes up into the official branch, the code of the program and of the tests must come together.

The end-to-end part is very important from both the standpoint of the cost of the tests and of their usefulness. It needs a good amount of elaboration. There had been much love professed for the unit tests, and then just as much disillusionment. I think the reason for this is that the unit tests as they're often understood are basically crap. People try to test each function, and this tends to both require a lot of code and be fairly useless, as many functions just don't have enough substance to test. Mind you, there are exceptions: if a function does something complicated or is a part of a public API, it really should have some tests. But if a function is straightforward, there is no point in looking at it in isolation. Any issues will come up anyway as a part of a bigger test of the code that uses this function.

The worst possible thing is the use of the mocks: those result in just the tautologies when the same thing is written twice and compared. These "tests" test only that the function goes through the supposedly-right motions, not that these motions are actually right and produce the proper result. This produces many horrible failures on deployment to production. A real test must check that the result is correct. If some API you use changes under you and breaks the result, a proper test will detect it. If you really have to use mocks, you must use them at least one level down. I.e. if you're testing a function in API1 that calls API2 that calls API3, you might sometimes get something useful by mocking the functions in API3 but never mock the functions in API2.

Another benefit of the integrated testing is that it often turns up bugs where you didn't expect to. The real programs are full of complex interactions, and the more you exercise these interactions, the better are your tests.

So my concept of a unit test is an "integrated unit test": a test for one feature of an externally visible API. Note that there might be some complex internal components that should be seen as a layer of API and tested accordingly, especially if they are reused in multiple places. But the general rule is the same. This both increases the quality and cuts down on the amount of the tautology code. And it also facilitates the stretch-then-extend model I've described above: if a test checks what the API call does, not how it does it, then you can change the underlying implementation without any change to the tests and verify that all the tests still pass, so your external API hasn't changed. This is a very important and fundamental ability of tests-as-CAD; if your tests don't have it then they are outright crap.

Some readers might now say: but doesn't that end-to-end approach contradict the requirement of the tests being fast? Doesn't this integration include the lengthy initialization? When done right, no, it doesn't. The answer is two-fold: First, when the tests are more integrated, there are fewer of them, and due to the integration they cover more of the internal interactions. Second, this forces you to make the initialization not lengthy. It might require a little extra work but it's worth it, and your customers will thank you for that.

This brings us to the last point: when you've built the support of the testing into your product, keep it there for the public releases and make it publicly available. This will make the life of your customers a lot easier, allowing them to easily build their own tests on top of your infrastructure. Instead of writing the stupid mocks, it's much better to have a side-door in the underlying APIs that would make them return the values you need to test your corner cases, and do it internally consistent with the rest of their state. I.e. if you mock some kind of an underlying error, how do you know that you mock it right, in the same way as the underlying API would manifest it? You don't. But if an underlying API has a way to ask it to simulate this error, it would simulate properly, as it does in the "real life".
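A sketch of such a side-door (the class and method names here are hypothetical): the injected error is raised through the same code path as a real failure, so it stays consistent with the rest of the object's state.

```python
class Storage:
    """An underlying API with a built-in test side-door.

    inject_error() asks the API itself to simulate the failure, so
    the error manifests exactly the way a real one would, instead of
    a hand-written mock guessing at the failure's shape.
    """
    def __init__(self):
        self._fail_next = None

    def inject_error(self, exc):
        # Test-only side-door: the next read() raises this error.
        self._fail_next = exc

    def read(self, key):
        if self._fail_next is not None:
            exc, self._fail_next = self._fail_next, None
            raise exc
        return "value-of-" + key

s = Storage()
assert s.read("a") == "value-of-a"
s.inject_error(IOError("disk gone"))
try:
    s.read("a")
    assert False, "expected the injected error"
except IOError:
    pass
assert s.read("a") == "value-of-a"   # state stays consistent afterwards
```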

Some people might now ask: But what about security? But what about performance?

As far as the security goes, security-through-obscurity is a bad idea anyway. Obviously, don't give the test access to the random entities; have a separate "access panel" in your API for the tests that would only allow the authorized connections. And if your product is a library then the security-through-obscurity is a REAL bad idea, and there are no random entities to start with. Make things accessible to your user. There is no good reason for a class to have any private members other than for the non-existing components. The most protection a class might ever need is protected. Likewise, there is no good reason for a class to have any final members. And if you're worried that someone will use the testing part of the API to do something that the main API doesn't allow then the solution is simple: make the main API allow it, otherwise you're crippling it intentionally.

As far as the performance goes, the overhead is usually pretty low. Note that you don't need to embed the test support all over the place, only at the substantial API layers. These API layers usually deal with the larger-scale concepts, not the ones you'd find at the bottom of a triple-nested loop. Sometimes there are exceptions to this rule but nothing that can't be resolved in some way, and the result after resolution is always better than before.

Hope that I've convinced you that the right testing infrastructure makes a world of difference in software development: it not only improves the quality but also makes the development faster and cheaper. With the right tests you'll never get stuck with "it's working somehow, don't touch it" or "we will need to study it for a year before making that change". Or to use the building analogy again (yeah, I've just decried it as wrong but I'll use it differently), the tests are not like the scaffolding on a building that you discard after the construction is completed, they are like the maintenance passages and hatches that keep the building in an inhabitable condition throughout its life.

Saturday, August 27, 2016

AdaBoost 9: Boost by majority afterthought

After some time I've realized that all this monkeying with the conditional probabilities in the Bayesian table is not necessary. You can just throw away a whole training case or a part of it and continue like nothing happened, the probabilities should stay consistent anyway. After all, the point of adjusting the weights after each round opposite to how the run-time weights would be changed is to give each training case an equal chance. But if we don't want to give some training case an equal chance then there is no point in treating it equally, an ignored training case can be simply ignored.

Another thought is that it looks like the two-pass approach can be used to find what training cases to throw away in a dynamic way. We can do it by splitting the set of available cases randomly in half. Then use one half for the first pass of training of N rounds and remember the weights throughout it. Then test the effectiveness of this training on the second half of the cases. But not just run the N rounds in the test. Instead, keep the test results for using only 1 round, 2 rounds, 3 rounds, and so on all the way to the N rounds. Then see the number of rounds on which the test did best, say K rounds. Going back to the training weights, we can find what training cases were not getting guessed well at K rounds. We can mark them as outliers. Then repeat the same thing swapping the two halves, and find the outliers in the second half. Then throw away the outliers and do the second pass of training on the rest of the cases.

Saturday, August 20, 2016

AdaBoost 8: Boost by majority

When I wrote before

The premise of boosting is that we're able to find a number of methods (what they call "hypotheses" in AdaBoost) to predict the correct outcomes of the training cases, each method correctly predicting more than 50% of the training cases. Then if we collect a large-ish number of these methods, we can predict the correct outcomes of all the training cases simply by averaging the predictions of these methods. And the other cases will follow the training cases (unless an overfitting happens). Since more than 50% of the cases are correctly predicted by each method, after the averaging more than 50% of the votes for each training case will be correct, and thus the result will be correct too. Of course this depends on the correct predictions being distributed pretty evenly among the cases. If we have a thousand methods that predict correctly the same cases and incorrectly the other cases, obviously after averaging these other cases will still be predicted incorrectly. So the selection of methods must somehow shuffle the preference for the cases, so that the next picked method will predict well the cases that have been predicted poorly by the previous picked methods. That's it, that's the whole basic idea. It's a bit of an oversimplification but it's easy to understand.

I really did mean it as an oversimplification, since AdaBoost uses the Bayesian decisions to do much better than the simple majority counting. Little did I know that there actually is the method of Boost By Majority (BBM) that does just the counting. It has some other differences but more about that later.

The simple averaging can be simulated through the Bayesian means too. Just use the same confidence for each event. Incidentally, that's what the NonAdaBoost algorithm, also known as epsilon-Boost does: it looks for the weak hypotheses that have at least a fixed "edge" gamma (i.e. the probability of the right guess being at least 0.5+gamma) and then always sets the confidence C=0.5+gamma, and uses the same value to adjust the weights of the training cases.
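As I understand it, the fixed confidence C = 0.5 + gamma translates into a fixed re-weighting factor C/(1-C) for the wrongly guessed cases; here is a small sketch of one such round (the function is my own formulation, not from the book):

```python
def epsilon_boost_reweight(weights, correct, gamma=0.1):
    """One NonAdaBoost/epsilon-Boost round: every weak hypothesis
    gets the same fixed confidence C = 0.5 + gamma, so the wrongly
    guessed training cases have their weight multiplied by the
    constant factor C/(1-C), and then everything is renormalized."""
    c = 0.5 + gamma
    factor = c / (1.0 - c)       # constant, unlike AdaBoost's per-round value
    new = [w * (1.0 if ok else factor) for w, ok in zip(weights, correct)]
    total = sum(new)
    return [w / total for w in new]

w = epsilon_boost_reweight([0.25] * 4, [True, True, True, False], gamma=0.1)
assert w[3] == max(w)                # the missed case gains weight
assert abs(sum(w) - 1.0) < 1e-9
```

The factor stays the same on every round regardless of the round's actual error, which is exactly the fixed-confidence behavior described above.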

The NonAdaBoost is essentially a version of AdaBoost with a fixed confidence, and suffering from this defect. But Boost By Majority has another big difference: the way it adjusts the weight of the training cases. The formula it uses is pretty complicated, so I won't even try to reproduce it here. But here is the gist: it keeps track of how many rounds are left to the end of the boosting and what is the balance of the votes collected by each training case. If the absolute value of the balance is higher than the number of rounds left, it means that the fate of this case can't be changed any more: it's either guaranteed to be guessed right if the balance is positive or guaranteed to be guessed wrong if the balance is negative, so the algorithm gives up on these cases. It gives the most weight to the most undecided cases, and spreads the rest of the weights in the bell shape of the binomial distribution. The result is that unlike AdaBoost and NonAdaBoost, BBM doesn't concentrate on the hardest cases that are likely to be the noise in the data anyway, and thus reduces the overfitting.

The last chapter of the book is about a combination of AdaBoost and BBM called BrownBoost (from the Brownian motion), or also "boosting in continuous time". It starts with the idea that if the returned partial hypothesis has a higher edge than minimally needed, it might still have enough edge after the re-weighting of the training cases, so it can be directly reused on the next round too without searching for a new one, and so on until its edge wears off. This gets developed into an algorithm that uses a real number in the range [0,1) instead of the round count, with the actual rounds moving the current point on it by varying amounts. The speed of the movement is determined by the pre-set desired training error. This training error gets reached when the end of the range is reached. If the target error is set to 0, the algorithm behaves in the same way as AdaBoost.

The downside is that the algorithm is complex, there isn't even a formula for determining the confidence values for each partial hypothesis. Instead you get a system of two equations that connect this confidence value and advancement through the range to be solved numerically. In the great scheme of things it's not a big deal, after all, compared to the finding of the partial hypotheses this shouldn't be such a big overhead. But it's not easy to think of. And the proof of the validity of this algorithm is quite complicated.

I can't help thinking of a couple of simpler ideas.

The first idea, or should I say guess, is that when we do AdaBoost, it fits into a Bayesian model. So if we keep this Bayesian model consistent, the boosting should still be valid. Obviously, I have no proper proof of that but it looks like a reasonable assumption. There is an easy way to take some training case (or a part of its weight) out of rotation and still keep the Bayesian model consistent.

Remember, previously I've described that we start with a Bayesian table that contains each individual training case

CaseId Weight Outcome  Ev1 ... EvT
1      1 *    true     1   ... 1
...
M      1 *    false    0   ... 0

Which then gets conflated into a two-line table, all the cases with the true outcome combined into one line, and the false ones into another line. The conditional probabilities in the table get averaged during conflation but since it's an averaging of all ones (or all zeroes), the result is still one (or zero).

Weight Outcome  Ev1 ... EvT
1 *    true     1   ... 1
1 *    false    0   ... 0

To take a training case out of rotation, we just change its conditional probability in all the following events (that match the AdaBoost's partial hypotheses) to 0.5. This says that no future events will affect it. And accordingly we set the AdaBoost weight of it to 0. Such decisions can be made according to any algorithm, matching any desired curve.

For example, suppose that we decide to take a case out of rotation when its weight relative to the sum of all weights reaches 0.1 (this is probably not a very good rule, since it allows at most 10 cases to be excluded, but it's simple for the sake of demonstration). Suppose that it's a case with the true ("1") outcome. And suppose that the weights of all the true cases total to the same value as those of all the false cases, each side having the relative weight of 0.5 (not very likely in reality but just as good a number as any other).

After the disablement, the conditional probability of the true cases will become ((0.5*0.1) + (1 * 0.4))/0.5 = 0.9.

Weight Outcome  Ev1 ... EvN-1 EvN ...
1 *    true     1   ... 1     0.9 ...
1 *    false    0   ... 0     0   ...

Once a training case gets disabled, it stays disabled for all the future rounds, and the AdaBoost keeps acting only on the weights of those training cases that still have 1 or 0 for the conditional probability. Obviously, the more you disable, the less will be the effect of the following rounds when the Bayesian computation runs.

Interestingly though, the edges of the partial hypotheses will be higher. Remember, the training cases that get thrown away are thrown away for being difficult to predict. So suppose the partial hypothesis EvN would have returned the confidence of 0.8 if that case wasn't thrown away, having guessed that case wrong. When we throw away the stubborn case, that 0.8 is now out of the former total of 0.9, so the confidence becomes 0.8/0.9 = 0.89, an improvement!
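The arithmetic of the example can be checked in a few lines (a small sketch that just re-derives the numbers above):

```python
def conflated_prob(cases):
    """Weighted average of the per-case conditional probabilities,
    as happens when the Bayesian table gets conflated.
    cases: list of (weight, conditional_probability)."""
    total = sum(w for w, p in cases)
    return sum(w * p for w, p in cases) / total

# The true cases total a relative weight of 0.5; one case of weight
# 0.1 gets disabled, i.e. its conditional probability is set to 0.5.
p_true = conflated_prob([(0.1, 0.5), (0.4, 1.0)])
assert abs(p_true - 0.9) < 1e-9

# A hypothesis that had the confidence 0.8 with the stubborn case
# included improves to 0.8/0.9 once that case is out of rotation.
assert abs(0.8 / 0.9 - 0.889) < 0.001
```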

However all this throwing-away has no effect on the previous rounds, these are already set in stone. Which suggests an easy solution, and that is my second idea: why not do two passes of AdaBoost? After the first pass look at the final weights of the training cases to determine the most stubborn ones. Throw them away and do the second pass from scratch. After all, BrownBoost requires an adjustment of the target training error, which gets done by running it multiple times with different values and then selecting the best one. Doing two passes of AdaBoost isn't any worse than that.
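To see the two-pass idea in action, here is a self-contained sketch: a deliberately minimal AdaBoost over 1-D threshold stumps (my own toy implementation, nothing from the book). On a toy set with one mislabeled case, that case collects the largest weight in the first pass, gets thrown away, and the second pass then separates the rest cleanly.

```python
import math

def train_adaboost(xs, ys, rounds):
    """Minimal AdaBoost over 1-D threshold stumps, just enough to
    watch the per-case weights. xs: numbers, ys: labels +1/-1."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        # Pick the stump (threshold t, direction d) with the lowest
        # weighted error; it predicts d for x >= t and -d otherwise.
        best = None
        for t in sorted(set(xs)):
            for d in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (d if x >= t else -d) != y)
                if best is None or err < best[0]:
                    best = (err, t, d)
        err, t, d = best
        err = min(max(err, 1e-10), 1.0 - 1e-10)
        alpha = 0.5 * math.log((1.0 - err) / err)
        model.append((alpha, t, d))
        # Reweight: the mispredicted cases gain weight.
        w = [wi * math.exp(-alpha * y * (d if x >= t else -d))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model, w

# A clean boundary between x=5 and x=6, except that x=2 is
# mislabeled as +1: that's the noise.
xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [-1, -1, 1, -1, -1, -1, 1, 1, 1, 1]

# First pass: the stubborn mislabeled case collects the most weight.
_, weights = train_adaboost(xs, ys, 3)
stubborn = max(range(len(xs)), key=lambda i: weights[i])
assert xs[stubborn] == 2

# Second pass from scratch without the outlier: the data is now
# separable, so the very first stump gets a huge confidence.
xs2 = [x for i, x in enumerate(xs) if i != stubborn]
ys2 = [y for i, y in enumerate(ys) if i != stubborn]
model2, _ = train_adaboost(xs2, ys2, 1)
assert model2[0][0] > 5.0
```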

Saturday, August 6, 2016

Bayes 23: enumerated events revisited

Another idea that I've glimpsed from the book on boosting is the handling of the enumerated events, previously described in the part 19. The part 6 of my notes on boosting describes how the decision stumps can be treated as only "half-stumps": actionable if the answer is "yes" and non-actionable if the answer is "no" (or vice versa). This is actually the same thing as I complained of before as the mistreatment of the Bayesian formula where the negative answer is treated the same as the lack of answer. But taken in the right context, it makes sense.

If we take a complementary pair of such half-symptoms (asking the same question, one of them equating the negative answer with "don't know", the other one equating the positive answer with "don't know"), their combined effect on the probability of the hypotheses will be exactly the same as that of one full symptom. In the weight-based model, the weights of the hypotheses after the complementary pair will be only half of those after one full symptom but they will all be scaled proportionally, making no difference. Alternatively, if we equate the negative answers not with "don't know" but with "irrelevant", even the weights will stay the same.

The interesting thing is that these half-symptoms can be straightforwardly extended to the multiple-choice questions. Each choice can be equated with one half-symptom. So if the answer to this choice is "yes" then it takes effect, if "no" then it gets skipped. In the end exactly one choice takes effect. Or potentially the answer can also be "none of the above", and then this symptom will be simply skipped. It should also be relatively straightforward to accommodate the answers like "it's probably one of these two", taking both answers at half-weight. I didn't work through the exact formulas yet but I think it shouldn't be difficult.
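One possible formula for taking an answer at a partial weight (my own guess, since the exact formulas aren't worked out above): blend the choice's conditional probability with the no-op factor of 1 in proportion to the strength of the answer.

```python
def apply_choice(weights, probs, choice, strength=1.0):
    """One multiple-choice question as a set of half-symptoms:
    only the chosen answer fires, the other choices are skipped.

    weights: current per-hypothesis weights.
    probs[h][c]: probability of choice c under hypothesis h.
    strength: 1.0 for a firm answer; less than 1.0 for "probably
    this one", blending the update with the no-op factor of 1."""
    return [w * (strength * probs[h][choice] + (1.0 - strength))
            for h, w in enumerate(weights)]

# Two hypotheses, three choices.
probs = [
    [0.7, 0.2, 0.1],   # hypothesis 0 favors choice 0
    [0.1, 0.3, 0.6],   # hypothesis 1 favors choice 2
]
w = apply_choice([1.0, 1.0], probs, choice=0)
assert w[0] > w[1]     # a firm choice 0 favors hypothesis 0

# "It's probably choice 1 or 2": apply both at half strength.
w2 = apply_choice(apply_choice([1.0, 1.0], probs, 1, 0.5), probs, 2, 0.5)
assert w2[1] > w2[0]
```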

The approach of taking the answer at a partial weight also provides a possible answer to "should we treat this problem as model-specific or generic?": it allows mixing both together, taking, say, the model-specific approach at the weight 0.99 and the generic at 0.01. Then if the model-specific approach finds a matching hypothesis, great, if not then the answer found with the generic approach will outweigh it. This weight of the generic approach should be higher than the confidence cap of the "can't happen" answer: the generic weight of 0.01 would probably work decently well together with the capping probability of 0.001.

Bayes 22: overfitting revisited

Previously I've been saying that I didn't experience overfitting in the Bayesian models, and pretty much discounted it. Now I've read a model of overfitting in the book on AdaBoost, and I understand why. Here is the gist, with some of my thoughts included.

The overfitting happens when the model starts picking the peculiarities of the particular training set rather than the general properties. It's down to the noise in the data: if the data contains random noise, only the cases without the noise can be predicted well on the general principles, and the noisy ones are bound to be mispredicted. The training data also contains noise. Since the noise is random, the noise in the test data (and in the future real-world cases) won't follow the noise in the training data closely. If the model starts following the noise in the training data too closely, it will mispredict the well-behaved cases in the test data, in addition to the noisy test cases. For all I can tell, this means that the overfitting magnifies the noise in the quadratic proportion, with probabilities:

P(good prediction) = P(no noise in the training data) * P(no noise in the test data)

If the model makes the decisions based on the "yes-no" questions (AKA binary symptoms), picking up the general trends takes a relatively small number of yes-no questions, because their effects are systematic. The effects of the noise are random, so each noisy training case is likely to require at least one extra yes-no question to tell it apart. If there is a substantial number of noisy cases in the training data, a lot of extra questions would be needed to tell them all apart. So the rule of thumb is, if there are few questions in the model compared to the number of the training cases, not much overfitting will happen.

In the models I was working with, there were tens of thousands of the training cases and only hundreds of symptoms. So there wasn't such a big chance of overfitting in general. Even if you say "but we should count the symptoms per outcome", there still were only low hundreds of outcomes, and if we multiply 100 symptoms by 100 outcomes, it's still only 10K decision points in the table, the same order of magnitude as the number of the training cases.

There also was very little noise as such in the data I've dealt with. If you do diagnostics, you get the natural testing: if the fix doesn't work, the client will come back. There is of course the problem of whether you've changed too many parts. It can be controlled to a degree by looking for training only at the cases where the fix was done at the first attempt. Though you still can't get the complete confidence for the cases where more than one part was changed. And of course if you don't look at the cases that required multiple attempts, it means that you're not learning to diagnose the more difficult cases.

But there was a particular kind of noise even in this kind of fairly clean data: the noise of multiple problems occurring or not occurring together in various combinations. If the model is sensitive to whether it had seen a particular combination or not, the randomness of the combinations means that they represent a random noise. And I've spent quite a bit of effort on reducing this dependence in the logic and on reducing this noise by preprocessing the training data. Which all amounts to reducing the overfitting. So I was wrong, there was an overfitting, just I didn't recognize it.

Actually, I think this can be used as a demonstration of the relation between the number of symptoms and amount of overfitting. If we're looking to pick only one correct outcome, the number of questions is just the number of questions, which was in hundreds for me. Much lower than the number of the training cases, and very little overfitting had a chance to happen. Yes, there were hundreds of possible outcomes but only one of them gets picked, and the number of questions that are used is the number of questions that affect it. But if we're looking at picking correctly all the outcomes, the number of questions gets multiplied by the number of outcomes. In the system I worked on, the total was comparable to the number of training cases, and the overfitting became noticeable. It would probably become even worse if the Bayesian table contained the rows not just for the outcomes but for the different ways to achieve these outcomes, like I've described in this series of posts. So with extra complexity you win on precision but the same precision magnifies the effects of overfitting. The sweet spot should be somewhere in the middle and depend a lot on the amount of noise in the data.

AdaBoost 7: multi-class & unseen combinations

The basic idea behind the multi-class (and also multi-label, i.e. where each case may have more than one outcome) AdaBoost can be described as boosting the recognition of all the outcomes in parallel. It takes the "yes or no" dichotomies for all the outcomes, and on each round it tries to find such a partial hypothesis where the sum of Z for all of them is minimal. This is very similar to what was described in the part 6, where multiple ranges were added, each with its own confidence. The difference is in the formula for the final computation: in the multi-class version there is a separate formula for each class that uses only the confidence values that all the training rounds computed for this particular class.

There is also a possible optimization for the situation where there may be only one outcome per case (i.e. single-label), saying that the mapping between the dichotomies used in the logic above and the outcomes doesn't have to be one-to-one. Instead each outcome can be mapped to a unique combination (or multiple combinations) of the dichotomies. The dichotomies can be selected in some smart way, say if we're trying to recognize the handwritten digits, they can be "pointy vs roundy", "a loop at the bottom vs no loop at the bottom", etc. Or just by dividing the outcomes in half in some blind way, like "0,1,2,3,4 vs 5,6,7,8,9", "0,2,4,6,8 vs 1,3,5,7,9", etc.
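The blind division into halves is essentially a binary encoding of the outcome number; here is a tiny sketch of that idea (the helper names are my own):

```python
def dichotomies(n_classes, n_bits):
    """Map each class to a combination of yes/no dichotomies: bit j
    of the class number answers dichotomy j. Bit 0 is exactly the
    blind "0,2,4,6,8 vs 1,3,5,7,9" split mentioned above."""
    return {c: [(c >> j) & 1 for j in range(n_bits)] for c in range(n_classes)}

codes = dichotomies(10, 4)
assert codes[5] == [1, 0, 1, 0]    # 5 = binary 0101, low bit first
# 4 dichotomies give all 10 digits distinct combinations.
assert len(set(tuple(v) for v in codes.values())) == 10

def decode(pred, codes):
    """Pick the class whose combination is nearest (by Hamming
    distance) to the predicted dichotomy answers."""
    return min(codes, key=lambda c: sum(p != b for p, b in zip(pred, codes[c])))

assert decode([1, 0, 1, 0], codes) == 5
```

The nearest-combination decoding also gives some tolerance: a single wrong dichotomy still tends to land closest to the right class when the codes are spread far enough apart.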

Returning to the multi-label situation, one of the problems I've experienced with it is the ability to recognize the combinations of outcomes that weren't present in the training data. That is, the outcomes were present but none of the training cases had exactly this combination. For the basic diagnostics, this can be discounted by saying "but what's the percentage of such cases", but when you start pushing the quality of diagnosis towards 95%, it turns out that a great many of the remaining misdiagnosed cases fall into this category.

AdaBoost doesn't have any built-in solution for this problem. The solution it produces is only as good as the underlying algorithm. There is nothing in AdaBoost that puts the pressure on the underlying algorithm to recognize the combinations that aren't present in the training data. If the underlying algorithm can do it anyway (and perhaps despite the pressure from AdaBoost), the resulting combined formula will be able to do it too. If it can't then the combined formula won't either. The simple algorithms like the decision stumps can't.

But maybe some multi-pass schemes can be devised. Run the boosting once, get a set of the candidate symptoms (i.e. partial hypotheses). Use these symptoms on the training cases to try to differentiate, which symptom is related to which outcome. Then run the boosting the second time from scratch, only this time with the relevance knowledge mixed in: whenever a symptom that is close to an irrelevant one is tested on a training case, make it return "I don't know", i.e. the confidence 0.5. This will shift the choice of symptoms. Obviously, if using the resulting formula, the same pruning of irrelevance has to be applied there in the same way. The symptoms from the second pass can be re-tested for relevance, and if any change is found, the third pass can be made, and so on.
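
In pseudocode, the multi-pass scheme might look like this (a sketch only: boost() and relevance() are hypothetical placeholders for the boosting run and the relevance measurement, not real functions):

```python
# Pseudocode sketch of the multi-pass relevance scheme described above.
def boost_with_relevance(cases, outcomes, rounds, passes=3):
    relevant = None  # map: (symptom, outcome) -> True/False; None = no pruning yet
    for _ in range(passes):
        # run the boosting from scratch; when a pruned symptom is tested on a
        # case whose outcome it is irrelevant to, it answers "don't know" (0.5)
        symptoms = boost(cases, rounds, prune=relevant)
        new_relevant = {(s, o): relevance(s, o, cases)
                        for s in symptoms for o in outcomes}
        if new_relevant == relevant:
            break  # relevance didn't change, no point in another pass
        relevant = new_relevant
    return symptoms, relevant
```

The same pruning map would then have to be carried along with the resulting formula and applied at the execution time.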

Or even better, perhaps this logic can be merged directly into each round of the underlying algorithm in one pass of AdaBoost: when a candidate partial hypothesis (i.e. symptom) is found, measure its relevance right there and change its Z-value accordingly. Pick the candidate that has the lowest Z-value even after it has been corrected for the relevance. Include the relevance information into the partial hypothesis.

Tuesday, August 2, 2016

Bayes 21: mutual exclusivity and independence revisited

I've been thinking about the use of AdaBoost on the multi-class problems, and I've accidentally realized what is going on when I've used the combination of the mutually exclusive and the independent computation of hypotheses, as described in the part 10.

Basically, the question boils down to the following: if the mutually exclusive computation shows that multiple hypotheses have risen to the top in about equal proportions (and consequently every one of them getting the probability below 0.5), how could it happen that in the independent computation each one of them would be above 0.5? If the training data had each case resulting in only one true hypothesis, the probabilities computed both ways would be exactly the same, in the independent version the probability of ~H being exactly equal to the sum of probabilities of the other hypotheses.

The answer lies in the training cases where multiple hypotheses were true. If we use the weight-based approach, it becomes easy to see that if a new case matches such a training case, it would bring all the hypotheses in this case to the top simultaneously. So the independent approach simply emphasizes the handling of these training cases. Equivalently, such training cases can be labeled with pseudo-hypotheses, and then the weight of these pseudo-hypotheses be added to the "pure" hypotheses. For example, let's consider re-labeling of the example from the part 10:

# tab09_01.txt and tab10_01.txt
              evA evB evC
1 * hyp1,hyp2   1   1   0
1 * hyp2,hyp3   0   1   1
1 * hyp1,hyp3   1   0   1

Let's relabel it as:

              evA evB evC
1 * hyp12       1   1   0
1 * hyp23       0   1   1
1 * hyp13       1   0   1

Then the probabilities of the original hypotheses can be postprocessed as:

P(hyp1) = P(hyp12)+P(hyp13)
P(hyp2) = P(hyp12)+P(hyp23)
P(hyp3) = P(hyp23)+P(hyp13)

And this would give the same result as the independent computations for every hypothesis.
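
This re-labeling is easy to check numerically. Here is a minimal weight-based sketch (the helper names are mine): feeding in the events of the hyp1,hyp2 case brings both hyp1 and hyp2 to probability 1 while hyp3 drops to 0, just as the independent computation would.

```python
# training rows: pseudo-hypothesis -> (evA, evB, evC), each with weight 1
table = {"hyp12": (1, 1, 0), "hyp23": (0, 1, 1), "hyp13": (1, 0, 1)}

def posterior(events):
    # weight-style matching: a row keeps its weight only while it agrees
    # with every seen event (0/1 match)
    w = {h: 1.0 for h in table}
    for i, ev in enumerate(events):
        for h in w:
            w[h] *= 1.0 if table[h][i] == ev else 0.0
    total = sum(w.values()) or 1.0
    return {h: wt / total for h, wt in w.items()}

p = posterior((1, 1, 0))          # the input matches the hyp12 row exactly
p1 = p["hyp12"] + p["hyp13"]      # P(hyp1)
p2 = p["hyp12"] + p["hyp23"]      # P(hyp2)
p3 = p["hyp23"] + p["hyp13"]      # P(hyp3)
print(p1, p2, p3)                 # hyp1 and hyp2 come out at 1.0, hyp3 at 0.0
```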

So this approach works well when the combined set of symptoms for multiple hypotheses has been seen in training, and not so well for the combinations that haven't been seen in the training. The combined use of the independent and mutually-exclusive computations with a low acceptance threshold for the independent computations tempers this effect only slightly.

Saturday, July 2, 2016

AdaBoost 6: multiple confidence ranges & other thoughts

One of the chapters I quickly read through was on using a better estimation of the probability of success of the partial hypotheses (in the AdaBoost sense), with the following example:

Suppose the underlying algorithm finds a decision stump and discovers that it can classify the cases having a value above some limit pretty well (say, 90% of them are "+1") while the cases with the values under this limit are undifferentiated (say, 49% of them are "+1"). In this case, if we allow the underlying algorithm to return not only the values +1 and -1 but also 0 for when it can't differentiate well, the boosting goes much better.
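
As a sketch of that example (my own code, with made-up thresholds and weights), the stump votes +1 above the limit and abstains with 0 below it:

```python
def abstaining_stump(x, limit):
    # above the limit ~90% of the cases are "+1", so vote +1;
    # below it the cases are near a coin toss (~49% "+1"), so abstain
    return +1 if x > limit else 0

# toy use: accumulate the votes of several such stumps, weighted by alpha
def vote(x, stumps):
    return sum(alpha * h(x) for alpha, h in stumps)

stumps = [(0.8, lambda x: abstaining_stump(x, 5.0)),
          (0.3, lambda x: abstaining_stump(x, 2.0))]
print(vote(6.0, stumps))  # both stumps fire: 0.8 + 0.3
print(vote(3.0, stumps))  # the first one abstains, only 0.3 remains
```

The abstention means the stump neither helps nor hurts on the undifferentiated side, instead of injecting a near-random vote there.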

Since the part 3 I've been wondering if the algorithm can be improved in an obvious way by using the separate confidence values for the situations when the underlying algorithm returns +1 or -1 (or in equivalent Bayesian terms, 1 or 0), and maybe even do better than that by forming multiple "bands" of training cases by some rule and assigning a separate confidence value to each band, and maybe even further by computing an individual confidence value for each case. The idea above shows that it can, and the book then goes on to the exact same idea of multiple bands, each with its own confidence value, and on to the partial hypotheses returning the individual confidence values for each training case. So that idea really was obvious, and there is not much point in writing more about it. The only tricky part is the minimalization criterion, but that becomes straightforward from the description in the part 4.

One more idea would be to make the decision stumps not discrete ("1" on this side of this value, "0" on the other side) but graduated, with the confidence being 0.5 at the value and growing towards 1 on one side and towards 0 on the other side. Possibly saturating at some distance from the center point value.

An easy example of saturation would be this: If we're classifying a set of points in a 2-dimensional space with the decision stumps based on the coordinate values, this means that on each step we find some value of a coordinate (X or Y) and say that the points on one side have mostly the value "1" and on the other side they have mostly the value "0". Then the classic AdaBoost approach (without the square root in the confidence) takes the confidence C as the fraction of the points whose values got guessed right. Or, translating from the pair (value, confidence) to just the confidence, the pair (1, C) becomes C, and the pair (0, C) becomes (1-C). But the breaking point is usually not a single point, it's a range between the coordinate values closest to this break. The classic approach is to pick the centerpoint of this range as the breaking point. But we could say that the confidence changes linearly between these two coordinate values and beyond this range saturates to C and (1-C). This smooth transition could help with the training on the data with errors in it. The book contains an example of training on the data that contained 20% of errors which led to overfitting over many rounds of boosting. The smooth transitions could provide the averaging that would counteract this overfitting. Maybe.
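
A sketch of such a graduated stump (the function and its parameters are my own illustration, assuming lo and hi are the nearest training values on the two sides of the break, and C is the fraction guessed right on the high side):

```python
def graduated_confidence(x, lo, hi, C):
    # confidence saturates to (1-C) below lo and to C above hi,
    # with a linear transition through 0.5 at the midpoint of the gap
    if x <= lo:
        return 1.0 - C        # fully on the low side
    if x >= hi:
        return C              # fully on the high side
    return (1.0 - C) + (C - (1.0 - C)) * (x - lo) / (hi - lo)

mid = graduated_confidence(1.5, 1.0, 2.0, 0.9)
assert abs(mid - 0.5) < 1e-9                              # 0.5 at the center
assert abs(graduated_confidence(0.0, 1.0, 2.0, 0.9) - 0.1) < 1e-9  # saturated low
assert graduated_confidence(5.0, 1.0, 2.0, 0.9) == 0.9             # saturated high
```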

Another thought that occurred to me is what if we're boosting while we're boosting? Or in other words, what if the partial algorithm runs a few rounds of boosting on its own? How can it be equivalently represented as a single round of boosting?

This looks like a good example where an individual value of the confidence for each training case would be handy. As we go through the execution stage, the weight of each example would be multiplied on each round of "nested boosting" in the same way as it would be on each round of normal boosting. I.e. if a round of boosting had the confidence C, and guessed this particular training case right, the weight of this training case will be multiplied by C, and if it guessed this training case wrong, the weight would be multiplied by (1-C).

So if we define the effective confidence Ce_t of each round as:

(if the round guessed right, Ce_t = C_t, otherwise Ce_t = 1-C_t)

then we can see that the multiplier will be:

product over rounds t of (Ce_t)

Except that it's not properly scaled to be in the range [0..1]. To adjust the scaling, it has to become

product over rounds t of (Ce_t) / (product over rounds t of (Ce_t) + product over rounds t of (1-Ce_t))

There won't be a single confidence value for all the training cases of such a composite round, each training case will have its own confidence value, with the probability of pointing towards 0 or 1 built into it.
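
As code, this scaling might look like the following sketch (the helper name is mine):

```python
def effective_confidence(rounds):
    # rounds: list of (C, guessed_right) pairs for this one training case
    num = 1.0
    den = 1.0
    for C, right in rounds:
        ce = C if right else 1.0 - C   # the effective confidence of one round
        num *= ce
        den *= 1.0 - ce
    return num / (num + den)   # scaled back into the range [0..1]

# two rounds of confidence 0.8, one right and one wrong, cancel out:
assert abs(effective_confidence([(0.8, True), (0.8, False)]) - 0.5) < 1e-9
# two right rounds reinforce each other past the individual 0.8:
assert effective_confidence([(0.8, True), (0.8, True)]) > 0.8
```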

One more interesting thing I've read is that after some number of rounds AdaBoost tends to start going in a circle through the same set of distributions (or sometimes not quite the same but very close). To me this looks like an indication that it's time to stop. Boosting the margins by going through this loop repeatedly looks like cheating, because if we look at its Bayesian meaning, this meaning implies that the events that get examined must be essentially independent of each other. But if we repeatedly examine the same events, they're not independent, they're extra copies of the events that have already been seen. Thus re-applying them repeatedly doesn't seem right. On the other hand, the events in such a loop do form this loop because they represent each event in a balanced way. So maybe the right thing to do is to throw away everything before this loop too, and just leave one iteration of this loop as the only events worth examining. This would be interesting to test if/when I get around to doing it.

Tuesday, June 28, 2016

AdaBoost 5: the square root and the confidence value

The question of what should be used as the confidence value for Bayesian computations is a thorny one. In my previous take on it I've used the value without a square root, while the classic AdaBoost formula uses the one with the square root. Which one is more right?

Both of them produce the same decision for the classification of the cases. The difference is in the absolute value of the chances, or in the AdaBoost terms, the margin (with the classic AdaBoost also applying the logarithm on top).

I still think that using the value without the square root has the more correct "physical meaning". But either can be used with the same result.

Following the classic AdaBoost, the version of the confidence with the square root follows from the expression

Z = sum for all good cases i (Wi / S) + sum for all bad cases j (Wj * S)
Z = sum for all good cases i (Wi * (1-C) / C) + sum for all bad cases j (Wj * C / (1-C))

Just like in the last post, W is the weight of a particular training case, the same as D(i) in the classic AdaBoost terms; it's just that since I've been using weights rather than a distribution, the letter W comes easier to my mind.

The "physical meaning" here is that the weights of the training cases for the next round of boosting are adjusted opposite to how the chances of these training cases get affected by this round during the final computation. If a training case gets predicted correctly by a partial hypothesis, its weight will get multiplied by C and the weight of the other cases will be multiplied by (1-C), so its chances will get multiplied by C/(1-C), and for the next round of boosting the good cases get divided by that to compensate. For the cases that get predicted incorrectly, the opposite happens.

This translates to

C / (1-C) = S = sqrt((1-e) / e) = sqrt(1-e) / sqrt(e)

The last expression does NOT mean

C = sqrt(1-e)

Instead, it means

C = sqrt(1-e) / (sqrt(1-e) + sqrt(e))

Alternatively, the approach without the square root starts with the premise

Z = sum for all good cases i (Wi / C) + sum for all bad cases j (Wj / (1-C))

The "physical meaning" is as described before, the weights of the training cases for the next round of boosting are adjusted opposite to how the weights of these training cases get affected by this round during the final computation. It seems to me that compensating the weights for the changes in weights is the more correct "physical meaning" than compensating the weights for the changes in chances.

This translates to

C / (1-C) = S^2 = (sqrt((1-e) / e))^2 = (1-e) / e


and hence

C = (1-e)
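
As a quick sanity check of the claim that both versions produce the same decisions, here is a small numeric sketch (my own code, not from the book): the no-square-root chance is simply the square of the square-root one, so both always land on the same side of 1 and differ only in the magnitude, i.e. the margin.

```python
import math
import random

def chance(rounds, use_sqrt):
    # rounds: list of (e, agrees) where e is the round's training error and
    # agrees says whether the round's prediction votes for this case
    c = 1.0
    for e, agrees in rounds:
        s = (1.0 - e) / e
        if use_sqrt:
            s = math.sqrt(s)   # the classic AdaBoost version of S
        c *= s if agrees else 1.0 / s
    return c

random.seed(1)
for _ in range(100):
    rounds = [(random.uniform(0.05, 0.45), random.random() < 0.5)
              for _ in range(7)]
    a = chance(rounds, use_sqrt=True)
    b = chance(rounds, use_sqrt=False)
    assert (a > 1.0) == (b > 1.0)     # the same decision every time
    assert abs(b - a * a) < 1e-6 * b  # only the margin differs: b = a squared
```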

By the way, chasing this version through the derivatives as shown in the previous post was interesting. I've appreciated why the authors of AdaBoost involved the exponents into the formulas: doing the derivatives and integrals with the exponents is much easier than without them. Then I've realized that with the derivatives where S = exp(Alpha) I'm computing dZ/dAlpha, not dZ/dC. And that is the correct approach. So without the exponents I should be computing the dZ/dS, and that gets easy again.

So, the real version of AdaBoost described in the third part is this:

Given: (x_1, y_1), ..., (x_m, y_m) where x_i belongs to X, y_i belongs to {-1, +1}.
Initialize: D_1(i) = 1 for i = 1, ..., m.
For t = 1, ..., T:
  • Train the basic algorithm using the weights D_t.
  • Get weak hypothesis h_t: X -> {-1, +1}.
  • Aim: select h_t to minimalize the boundary on the training error Z_t where:
    Wgood_t = 0; Wbad_t = 0;
    for i = 1, ..., m {
         if h_t(x_i) = y_i {
              Wgood_t += D_t(i)
         } else {
              Wbad_t += D_t(i)
         }
    }
    C_t = Wgood_t / (Wgood_t + Wbad_t)
    Z_t = Wgood_t/C_t + Wbad_t/(1-C_t)
    which can also be represented symmetrically through S_t:
    Z_t = Wgood_t/S_t + Wbad_t*S_t
    S_t = sqrt(C_t / (1-C_t)) = sqrt(Wgood_t / Wbad_t)
    and substituting S_t:
    Z_t = Wgood_t / sqrt(Wgood_t / Wbad_t) + Wbad_t * sqrt(Wgood_t / Wbad_t)
    = 2 * sqrt(Wgood_t * Wbad_t)
    Which gets minimalized when either of Wgood_t or Wbad_t gets minimalized, but to be definitive we prefer to minimalize Wbad_t.
  • Update:
    for i = 1, ..., m {
         if h_t(x_i) != y_i {
              D_t+1(i) = D_t(i) / (1-C_t)
         } else {
              D_t+1(i) = D_t(i) / C_t
         }
    }
Produce the function for computing the value of the final hypothesis:
H(x) {
     chance = 1;
     for t = 1, ..., T {
          if (h_t(x) > 0) {
               chance *= C_t/(1-C_t);
          } else {
               chance *= (1-C_t)/C_t;
          }
     }
     return sign(chance - 1)
}

Having this sorted out, I can move on to more creative versions of the algorithm where on a step of boosting the different training cases may get stamped with the different confidence values C.

Friday, June 24, 2016

AdaBoost 4 or the square root returns

I've had to return the library book on boosting. I've ordered my own copy, but on the last day I paged through the more interesting chapters. This gave me some ideas on how the Bayesian model can be fit to those too, and I started writing it up, but I got stopped at the question: what should get minimized? So I went back to the book (my own copy now) and I think that I finally understand it. Here is how it works:

The real measure that gets minimized on each round of boosting is Z. In the book the authors prove that it's not the training error as such but the upper bound on the training error. They write it (with dropped subscript t of the boosting round) as

Z = sum for all good cases i (Wi * exp(-Alpha)) + sum for all bad cases j (Wj * exp(Alpha))

Where W is the weight of a particular training case (same as D(i) in the classic AdaBoost terms, just since I've been using weights rather than a distribution, the letter W comes easier to my mind). They have a use for this exponent in the proofs but here it doesn't matter. Let's create a notation with a new variable S (the name doesn't mean anything, it's just a letter that hasn't been used yet) that will sweep the exponent under the carpet:

S = exp(Alpha)
Z = sum for all good cases i (Wi / S) + sum for all bad cases j (Wj * S)

S has a "physical meaning": it represents how the chances of a particular case change. The weights of the "good" cases (i.e. those that got predicted correctly on a particular round) get divided by S because we're trying to undo the effects of the changes from this round of boosting for the next round. Getting back to the Bayesian computation,

S = C / (1-C)

Where C is the confidence of the round's prediction acting on this case. When the produced boosting algorithm runs, if the round's prediction votes for this case (which we know got predicted by it correctly in training), its weight will be multiplied by C. If the prediction votes against this case, its weight will be multiplied by (1-C). Thus S measures how strongly this prediction can differentiate this case. For the cases that get mispredicted by this round, the relation is opposite, so their weights get multiplied by S instead of being divided.

The mathematical way to minimize Z is by finding the point(s) where its derivative is 0.

dZ/dS = sum for all good cases i (-Wi / S^2) + sum for all bad cases j (Wj) = 0
sum for all good cases i (Wi / S^2) = sum for all bad cases j (Wj)

And since for now we assume that S is the same for all the cases,

sum for all good cases i (Wi) = Wgood
sum for all bad cases j (Wj) = Wbad


Wgood / S^2 = Wbad
Wgood / Wbad = S^2
S = sqrt(Wgood / Wbad)

And since we know that the training error e is proportional to Wbad and (1-e) is proportional to Wgood:

e = Wbad / (Wgood + Wbad)
1-e = Wgood / (Wgood + Wbad)

then we can rewrite S as

S = sqrt( (1-e) / e )

Or substituting the expression of S through C,

C / (1-C) = sqrt( (1-e) / e )
C = sqrt(1-e)

Which means that the Bayesian confidence in AdaBoost really is measured as a square root of the "non-error". This square root can be put away in the basic case but it becomes important for the more complex variations.
[I think now that this part is not right, will update later after more thinking]

Returning to the minimization, this found value of S gets substituted into the original formula of Z, and this is what we aim to minimize on each round. For the classic AdaBoost it is:

Z = Wgood / sqrt((1-e) / e) + Wbad * sqrt((1-e) / e)

Since Wgood is proportional to (1-e) and Wbad is proportional to e (and if the sum of all the weights is 1, Wgood = 1-e and Wbad = e), we can substitute them:

Z = (1-e) / sqrt((1-e) / e) + e * sqrt((1-e) / e)
= sqrt(e * (1-e)) + sqrt(e * (1-e))
= 2 * sqrt(e * (1-e))

This value gets minimized when e gets closer to either 0 or 1 (because the formula is symmetric and automatically compensates for the bad predictions by "reverting" them). The classic AdaBoost algorithm then says "we'll rely on the ability of the underlying algorithm to do the same kind of reverting" and just picks the minimization of e towards 0. This makes the computation more efficient for a particular way of computation for C but the real formula is above. If we want to handle the more generic ways, we've got to start with this full formula and only then maybe simplify it for a particular case.

This formula can be also written in a more generic way, where each training case may have its own different value of confidence C and thus of S:

Z = sum for all good cases i(Wi / Si) + sum for all bad cases j(Wj * Sj)

And now I'll be able to talk about these more generic cases.

By the way, the difference of how the variation of AdaBoost for logistic regression works from how the classic AdaBoost works is in a different choice of measure for what gets minimized, a different formula for Z.

Sunday, May 29, 2016

AdaBoost 3 and Bayesian logic

I think I've figured out a way to express the workings of AdaBoost in terms of the Bayesian processing, and I think it becomes simpler this way. Now, I'm not the first one to do that, the book "Boosting" describes a way to do this with what it calls the logistic regression. It also derives an approach to probabilities in AdaBoost from this way. But the logistic regression uses a sigmoid curve (I think like the one generally used for the normal distribution) and differs from the plain AdaBoost. I think my explanation works for the plain AdaBoost.

Before I start on that explanation, I want to talk about why the boosting works. Yeah, I've followed the proofs (more or less) but it took me a while to get a bit of an intuitive feeling in my head, the "physical meaning" of the formulas. I want to share this understanding which is much simpler than these proofs (and I hope that it's correct):

The premise of boosting is that we're able to find a number of methods (what they call "hypotheses" in AdaBoost) to predict the correct outcomes of the training cases, each method correctly predicting more than 50% of the training cases. Then if we collect a large-ish number of these methods, we can predict the correct outcomes of all the training cases simply by averaging the predictions of these methods. And the other cases will follow the training cases (unless an overfitting happens). Since more than 50% of the cases are correctly predicted by each method, after the averaging more than 50% of the votes for each training case will be correct, and thus the result will be correct too. Of course this depends on the correct predictions being distributed pretty evenly among the cases. If we have a thousand methods that predict correctly the same cases and incorrectly the other cases, obviously after averaging these other cases will still be predicted incorrectly. So the selection of methods must somehow shuffle the preference for the cases, so that the next picked method will predict well the cases that have been predicted poorly by the previous picked methods. That's it, that's the whole basic idea. It's a bit of an oversimplification but it's easy to understand.

Now on to the explanation of the connection between AdaBoost and the Bayesian logic.

To start with, I want to show one more rewrite of the AdaBoost algorithm. It builds further on the version I've shown in the last installment. Now I'm returning back to the notation of e_t and (1-e_t). These values are proportional to Wbad_t and Wgood_t but are confined to the range [0, 1] which fits well into the Bayesian computations. More importantly, this new version gets rid of the square root in the computation of D_t+1(i). It wasn't the first thing I realized for the connection between the AdaBoost and Bayesian logic, it was actually the last thing I realized, but after that all the parts of the puzzle fell into place. So I want to show this key piece of the puzzle first.

The point of this computation is to readjust the weights of the training cases for the next round, so that the total weight of all the cases successfully predicted by the current round equals the total weight of the unsuccessfully predicted ones. There is more than one way to make the weights satisfy this condition; they're all proportional to each other and all just as good. The way the weights are modified in the classic form of the AdaBoost algorithm has been selected to make the modification symmetric, and to allow writing the adjustment for both correct and incorrect cases as a single formula:

D_t+1(i) = D_t(i)*exp(-Alpha_t*y_i*h_t(x_i)) / Z_t

But if we're writing an honest if/else statement, it doesn't have to be symmetric. We can as well write either of:

     if h_t(x_i) != y_i {
          D_t+1(i) = D_t(i) / e_t
     } else {
          D_t+1(i) = D_t(i) / (1-e_t)
     }

or with a degenerate "else" part:

     if h_t(x_i) != y_i {
          D_t+1(i) = D_t(i) * (1-e_t) / e_t
     } else {
          D_t+1(i) = D_t(i)
     }

Either way the result is the same. And there is no need to involve the square roots, the formula becomes simpler. After making this simplification, here is my latest and simplest version of the algorithm:

Given: (x_1, y_1), ..., (x_m, y_m) where x_i belongs to X, y_i belongs to {-1, +1}.
Initialize: D_1(i) = 1 for i = 1, ..., m.
For t = 1, ..., T:
  • Train the basic algorithm using the weights D_t.
  • Get weak hypothesis h_t: X -> {-1, +1}.
  • Aim: select h_t to minimalize the weighted error e_t:
    Wgood_t = 0; Wbad_t = 0;
    for i = 1, ..., m {
         if h_t(x_i) = y_i {
              Wgood_t += D_t(i)
         } else {
              Wbad_t += D_t(i)
         }
    }
    e_t = Wbad_t / (Wgood_t + Wbad_t)
  • Update:
    for i = 1, ..., m {
         if h_t(x_i) != y_i {
              D_t+1(i) = D_t(i) * (1-e_t) / e_t
         } else {
              D_t+1(i) = D_t(i)
         }
    }
Produce the function for computing the value of the final hypothesis:
H(x) {
     chance = 1;
     for t = 1, ..., T {
          if (h_t(x) > 0) {
               chance *= (1-e_t)/e_t;
          } else {
               chance *= e_t/(1-e_t);
          }
     }
     return sign(chance - 1)
}
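
For a concrete illustration, here is a runnable Python sketch of this simplest version, with single-feature decision stumps as the "basic algorithm" (the toy data and all the helper names are mine, not from the book):

```python
def best_stump(xs, ys, D):
    # try every threshold and both directions, pick the lowest weighted error
    best = None
    for thr in sorted(set(xs)):
        for sign in (+1, -1):
            h = lambda x, t=thr, s=sign: s if x >= t else -s
            wbad = sum(d for x, y, d in zip(xs, ys, D) if h(x) != y)
            e = wbad / sum(D)
            if best is None or e < best[0]:
                best = (e, h)
    return best

def adaboost(xs, ys, T):
    D = [1.0] * len(xs)          # Initialize: D_1(i) = 1
    rounds = []
    for _ in range(T):
        e, h = best_stump(xs, ys, D)
        e = max(e, 1e-9)         # guard against division by zero on a perfect stump
        rounds.append((e, h))
        # only the mispredicted weights get rescaled by (1-e)/e
        D = [d * (1 - e) / e if h(x) != y else d
             for x, y, d in zip(xs, ys, D)]
    return rounds

def H(x, rounds):
    chance = 1.0
    for e, h in rounds:
        chance *= (1 - e) / e if h(x) > 0 else e / (1 - e)
    return 1 if chance > 1.0 else -1

# toy data: positive in the middle, so no single stump can fit it
xs = [1, 2, 3, 4, 5, 6]
ys = [-1, -1, 1, 1, -1, -1]
model = adaboost(xs, ys, T=5)
assert all(H(x, model) == y for x, y in zip(xs, ys))
```

The interval-shaped toy data shows the point of boosting: each stump alone errs on a third of the cases, but the multiplied chances of a few rounds classify everything correctly.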

Now this form can be translated to the Bayesian approach. I'll be using the form of Bayesian computations that works with weights rather than probabilities. The translation to this form is fairly straightforward:

There are two Bayesian mutually-exclusive hypotheses (but to avoid confusion with the AdaBoost term "hypothesis", let's call them "outcomes" here): one that the result is true (1), another one that the result is false (0).

Each "hypothesis" h_t(x) in AdaBoost terms becomes an "event" Ev_t in the Bayesian form (to avoid the terminological confusion I'll call it an event henceforth). Each event is positive: it being true predicts that the true outcome is correct, and vice versa. The training table looks like this:

Weight Outcome  Ev1 ... EvT
1 *    true     1   ... 1
1 *    false    0   ... 0

Aside from this, we remember e_t from all the boosting rounds, and of course the computations for all the "events" chosen by the boosting.

Then when we run the model, we get the set of arguments x as the input, and start computing the values of the events. When we compute the event Ev_t, we apply it with the confidence value C(Ev_t) = (1-e_t), as described in the fuzzy training logic. As a result, if Ev_t is true, the weight of the true outcome gets multiplied by (1-e_t) and the weight of the false outcome gets multiplied by e_t. If the event is false, the multiplication goes the opposite way.

In the end we compute the probability of the true outcome from the final weights: W(true)/(W(true)+ W(false)). If it's over 0.5, the true outcome wins. This logic is exactly the same as in the function H(x) of the AdaBoost algorithm.
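
A small numeric sketch of this equivalence, with made-up e values (the helper name is mine): the final probability in the weight form has exactly the odds that the H(x) chance computes.

```python
def bayes_probability(events):
    # events: list of (e_t, ev_is_true) pairs from the boosting rounds;
    # start with weight 1 for each outcome, multiply per event as described
    w_true, w_false = 1.0, 1.0
    for e, ev in events:
        if ev:
            w_true *= 1.0 - e
            w_false *= e
        else:
            w_true *= e
            w_false *= 1.0 - e
    return w_true / (w_true + w_false)

p = bayes_probability([(0.2, True), (0.4, True), (0.3, False)])
# the chance in the H(x) form is the odds of the same probability:
chance = (0.8/0.2) * (0.6/0.4) * (0.3/0.7)
assert abs(p / (1 - p) - chance) < 1e-9
assert (p > 0.5) == (chance > 1.0)   # the same decision either way
```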

Why is (1-e_t) used as the confidence value? Since it's the fraction of the training cases that got predicted correctly by Ev_t, it makes sense that we're only so much confident in this event. Well, for the first event it is, but for the following events (1-e_t) is computed from a modified distribution of the training cases, with the adjusted weights. Does it still make sense?

As it turns out, it does. To find out why, we need to look at the weights of the training cases after they pass the filter of the first event. Instead of looking at the training table with two composite rows we need to step back and look at the table with the M original uncombined training cases:

CaseId Weight Outcome  Ev1 ... EvT
1       1 *    true     1   ... 1
M       1 *    false    0   ... 0

The cases that have the outcome of true still have 1 for all the events, and the ones with the false outcome have 0 for all the events. If we run the algorithm for an input that matches a particular training case, Ev_1 will predict the outcome correctly for some of the training cases and incorrectly for the others. When it predicts correctly, the weight of this training case will be multiplied by the confidence C(Ev_1) and will become (1-e_1). But when it predicts incorrectly, the weight will be multiplied by e_1. The distribution will become skewed. We want to unskew it for computing the confidence of Ev_2, so we compensate by multiplying the weights of the cases that got predicted incorrectly by (1-e_1)/e_1. Lo and behold, it's the same thing that is done in AdaBoost:

     if h_t(x_i) != y_i {
          D_t+1(i) = D_t(i) * (1-e_t) / e_t
     } else {
          D_t+1(i) = D_t(i)
     }

So it turns out that on each step of AdaBoost the distribution represents the compensation of the weight adjustment for the application of the previous events, and the value (1-e_t) is the proper adjusted fraction of the training cases that got predicted correctly. Amazing, huh? I couldn't have come up with this idea from scratch, but given an existing formula, it all fits together.

What is the benefit of looking at AdaBoost from the Bayesian standpoint? For one, we get the probability value for the result. For another one, it looks like AdaBoost can be slightly improved by changing the initial weights of true and false outcomes from 1:1 to their actual weights in the M training cases. That's an interesting prediction to test. And for the third one, maybe it can be used to improve the use of AdaBoost for the multi-class classifications. I haven't read the book that far yet, I want to understand the current state of affairs before trying to improve on it.

Saturday, May 28, 2016

Summary of Bayes by weight

This is a copy of the post from my MSDN blog. If you've been reading this blog, you've already seen all the ideas described here. But I've realized that the repost can be useful on this blog for the references, as a short summary of the most interesting parts. So, here it goes.

I recently wrote a series of posts in my other blog on the Bayes expert system. Some time ago I wrote an expert system, and I've been surprised by how little information is available on the Internet and how much of it is misleading, so I wanted to make a write-up of my experience, and recently I've finally found time to do it. It turned out larger than I expected, and as I was writing it I myself have gained a deeper understanding of my previous experience and came up with further better ideas. The text starts from the very basics of the Bayesian formula and goes through its use in the expert systems, the typical pitfalls when building the expert systems and the solutions to these pitfalls, the ways to handle the uncertainties, a deeper dive into how and why these formulas actually work, more of the practical solutions based on that knowledge, and the aspects of the testing.

The whole series is here (in the reverse order):
The first post is here:
They are not interspersed with any other posts, so you can read sequentially.

Right in the first post I refer to the Yudkowsky's explanation, so why would you want to read my explanation rather than his? By all means, read his explanation too, it's a good one. But he explains it in a different context. He takes one application of the Bayesian formula and works through it. So do many other texts on the artificial intelligence. The expert systems don't apply the formula once. They apply the formula thousands of times to make a single diagnosis. Which brings its own problems of scale that I describe. And there are many other practical issues. Your input data might be "impossible" from the standpoint of probabilities in your knowledge base. The evidence of absence is not the same as the absence of evidence. The events are rarely completely independent. Multiple hypotheses might be valid at the same time. And so on.

The particularly surprising realization for me was that the Bayesian systems and the decision trees are fundamentally the same thing. You can represent either one through the other one. They are traditionally used in somewhat different ways but that's just the question of tuning the parameters. They can be easily tuned either way or anywhere in between. This stuff is in the part 11 of my notes. There it requires the context from the previous parts, but here is the short version as a spoiler. The short version might not be very comprehensible due to its shortness but at least it points to what to look for in the long version.

So, it starts with the basic Bayes formula where the probability of some hypothesis H after taking into account the information about the event E is:

P(H|E) = P(H) * P(E|H) / P(E)

The hypotheses can also be thought of as diagnoses, and the events as symptoms.

In a typical expert system there can be hundreds of hypotheses and hundreds of events to consider. The values of P(E|H) are taken from the table that is computed from the training data. The values of P(H) and P(E) change as the events are applied one after another (P(H|E) for one event becomes P(H) for the next event, and P(E) gets computed from the complete set of values for P(H) and P(E|H)) but their initial values are also sourced from the same table. At the end one or multiple most probable hypotheses are chosen as the diagnosis.
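To make the sequential application concrete, here is a minimal sketch in Python (not the author's code; all table values are hypothetical examples made up for illustration). It shows how P(H|E) from one event becomes P(H) for the next, with P(E) recomputed each time from the current P(H) and the P(E|H) table:

```python
# Hypothetical probability table for two hypotheses and two events.
priors = {"H1": 0.6, "H2": 0.4}    # initial P(H) from the table
p_e_given_h = {                     # P(E|H) from the table
    ("E1", "H1"): 0.67, ("E1", "H2"): 0.0,
    ("E2", "H1"): 0.0,  ("E2", "H2"): 1.0,
}

def apply_event(p_h, event):
    # P(E) gets computed from the complete set of values for P(H) and P(E|H)
    p_e = sum(p * p_e_given_h[(event, h)] for h, p in p_h.items())
    # Bayes formula: P(H|E) = P(H) * P(E|H) / P(E)
    return {h: p * p_e_given_h[(event, h)] / p_e for h, p in p_h.items()}

p_h = apply_event(priors, "E1")   # the result becomes P(H) for the next event
```

With these made-up numbers, observing E1 drives P(H1) to 1.0, since E1 never occurs under H2 in the table.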

What is the training data? It's the list of the previously diagnosed cases, where both the correct winning hypotheses and the values of the events are known. In the simplest way it can be thought of as a bitmap where each row has a hypothesis name and the bits for every event, showing if it was true or false:

    E1  E2  E3  E4
H1   1   0   0   0
H1   1   0   0   0
H1   0   0   0   1
H2   0   1   1   0
H2   0   1   1   0

In reality one case may have a diagnosis of multiple hypotheses, and the values of the events might be not just 0 and 1 but a number in the range between 0 and 1: we might not be completely confident in some symptom but have say a 0.7 confidence that it's true.

How is the table of probabilities built from the training data? For P(H) we take the proportion of the number of cases with this hypothesis to the total number of cases. For P(E|H) we take all the cases for the hypothesis H and average all the values for the event E in them. For P(E) we average all the values for E in all the cases in the whole training data.
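The computation of the table from the bitmap above can be sketched like this (a minimal illustration, using the five training cases from the example):

```python
# The training bitmap from the example: (hypothesis, event values).
cases = [
    ("H1", [1, 0, 0, 0]),
    ("H1", [1, 0, 0, 0]),
    ("H1", [0, 0, 0, 1]),
    ("H2", [0, 1, 1, 0]),
    ("H2", [0, 1, 1, 0]),
]
n_events = 4
hyps = sorted({h for h, _ in cases})

# P(H): the share of the cases with this hypothesis.
p_h = {h: sum(1 for hh, _ in cases if hh == h) / len(cases) for h in hyps}

# P(E|H): the average of the event values over the cases of this hypothesis.
p_e_h = {}
for h in hyps:
    rows = [ev for hh, ev in cases if hh == h]
    p_e_h[h] = [sum(ev[i] for ev in rows) / len(rows) for i in range(n_events)]

# P(E): the average of the event values over all the cases.
p_e = [sum(ev[i] for _, ev in cases) / len(cases) for i in range(n_events)]
```

This gives P(H1)=0.6, P(E1|H1)=2/3, P(E1)=0.4, and so on.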

As it turns out, this approach doesn't always work so well. Consider the following training data:

    E1 E2
H1   0  0
H1   1  1
H2   1  0
H2   0  1

The meaning of the data is intuitively clear: it's H1 if E1 and E2 are the same, and H2 if E1 and E2 are different. But when we start computing P(E|H), all of them end up at 0.5, and the resulting expert system can't tell anything apart.
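The failure is easy to reproduce (a small sketch over the four training cases above):

```python
# The XOR-like training data: H1 when E1 and E2 match, H2 when they differ.
cases = [("H1", [0, 0]), ("H1", [1, 1]), ("H2", [1, 0]), ("H2", [0, 1])]

p_e_h = {}
for h in ("H1", "H2"):
    rows = [ev for hh, ev in cases if hh == h]
    # averaging the event values within each hypothesis loses the correlation
    p_e_h[h] = [sum(ev[i] for ev in rows) / len(rows) for i in range(2)]
```

Every P(E|H) comes out as 0.5, so the two hypotheses look identical to the formula.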

There is a solution: for the duration of the computation, split each hypothesis into two, say H1A and H1B:

    E1 E2
H1A  0  0
H1B  1  1
H2A  1  0
H2B  0  1

Before making the final decision, add up the probabilities P(H1)=P(H1A)+P(H1B) and use them for the decision. Now the logic works. The hypotheses can be split down to the point where each case in the training data becomes its own sub-hypothesis; indeed, the current example does exactly this. If there are multiple cases that are exactly the same, we can keep them together by assigning a weight instead of splitting them into separate sub-hypotheses. For example, if there are 5 cases that are exactly the same, we can make them one sub-hypothesis with the weight of 5.

And with such a fine splitting the computation of probabilities can be thought of as striking out the sub-hypotheses that don't match the incoming events. If we're computing the diagnosis and receive E1=1, we can throw away H1A and H2B, leaving only H1B and H2A. If then we receive E2=1, we can throw away H2A, and the only hypothesis left, H1B, becomes the final diagnosis that we'll round up to H1.
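A sketch of this striking-out view (with a hypothetical helper name; note that consistently with the decision tree, observing E1=1 and then E2=1 is what leaves H1B):

```python
# Each sub-hypothesis is one training case: a row of event values.
sub_hyps = {"H1A": [0, 0], "H1B": [1, 1], "H2A": [1, 0], "H2B": [0, 1]}

def eliminate(hyps, event_index, value):
    # keep only the rows whose value for this event matches the observation
    return {h: ev for h, ev in hyps.items() if ev[event_index] == value}

left = eliminate(sub_hyps, 0, 1)  # E1=1 strikes out H1A and H2B
left = eliminate(left, 1, 1)      # E2=1 strikes out H2A, leaving H1B
```

The single surviving sub-hypothesis H1B rounds up to the diagnosis H1.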

We're essentially trying to find a match between the current case and one of the training cases. But that's exactly what the decision trees do! We can represent the same table as a decision tree:

        0   E1    1
        |         |
        V         V
     0 E2  1   0 E2  1
     +--+--+   +--+--+
     |     |   |     |
     V     V   V     V
    H1A   H2B H2A   H1B

As you can see, it's equivalent: it produces the exact same result on the same input.

But what if we're not entirely confident in the input data? What if we get E1 with the confidence C(E1)=0.7? The way the Bayesian formula

P(H|C(E)) = P(H) * ( C(E)*P(E|H) + (1-C(E))*(1-P(E|H)) )
  / ( C(E)*P(E) + (1-C(E))*(1-P(E)) )

treats it amounts to "multiply the weight of the cases where E1=1 by 0.7 and multiply the weight of the cases where E1=0 by 0.3". So in this example we won't throw away the cases H1A and H2B but will multiply their weights by 0.3 (getting 0.3 in the result). H1B and H2A don't escape unscathed either, their weights get multiplied by 0.7. Suppose we then get E2=0.8. Now the weights of H1A and H2A get multiplied by 0.2, and of H1B and H2B get multiplied by 0.8. We end up with the weights:

W(H1A) = 1*0.3*0.2 = 0.06
W(H1B) = 1*0.7*0.8 = 0.56
W(H2A) = 1*0.7*0.2 = 0.14
W(H2B) = 1*0.3*0.8 = 0.24

When we add up the weights to full hypotheses, W(H1)=0.62 and W(H2)=0.38, so H1 wins (although whether it actually wins or we consider the data inconclusive depends on the boundaries we set for the final decision).
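The whole weight computation above can be reproduced in a few lines (a sketch; each sub-hypothesis starts with the weight 1, and each event multiplies it by the confidence of the matching value):

```python
# The four sub-hypotheses and the confidences C(E1)=0.7, C(E2)=0.8.
sub_hyps = {"H1A": [0, 0], "H1B": [1, 1], "H2A": [1, 0], "H2B": [0, 1]}
confidences = [0.7, 0.8]

weights = {h: 1.0 for h in sub_hyps}
for i, c in enumerate(confidences):
    for h, ev in sub_hyps.items():
        # rows where the event is 1 get scaled by C(E), rows with 0 by 1-C(E)
        weights[h] *= c if ev[i] == 1 else 1.0 - c

w_h1 = weights["H1A"] + weights["H1B"]  # 0.06 + 0.56 = 0.62
w_h2 = weights["H2A"] + weights["H2B"]  # 0.14 + 0.24 = 0.38
```

The weights come out exactly as in the text, with H1 winning at 0.62 against 0.38.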

Can this be represented as a decision tree? Sure! It just means that when we make a decision at each event node we don't choose one way out. We choose BOTH ways out but with different weights. If the weight of some branch comes down to 0, we can stop following it, but we faithfully follow all the other branches until they come down to the leaves, and keep track of the weights along the way.

Now, what if the table contains not just 0s and 1s but the arbitrary values between 0 and 1? This might be because some training cases had only a partial confidence in their events. Or it might be because we averaged the events of multiple training cases together, building the classic Bayesian table with one line per hypothesis.

We can still compute it with weights. We can just logically split this case into two cases with the partial weights. If we have a value of 0.7 for the hypothesis H and event E, we can split this line into two, one with 1 in this spot and the weight of 0.7, another one with 0 in this spot and the weight of 0.3. And we can keep splitting this case on every event. For example, if we start with

    E1  E2
H1  0.7 0.1

we can first split it into 2 cases on E1:

     E1  E2   W
H1A  1   0.1  0.7
H1B  0   0.1  0.3

And then further split on E2:

     E1  E2   W
H1AA 1   1    0.07
H1AB 1   0    0.63
H1BA 0   1    0.03
H1BB 0   0    0.27

Or we can modify the table as we apply the events: right before applying the event, split each row in the table in twain based on the values in it for this event, apply the event by multiplying the weights appropriately, and then collapse the split rows back into one by adding up their weights. This makes the intermediate values more short-lived and saves on their upkeep. And that's exactly the logic used in the classic Bayesian formula.
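The split-apply-collapse step for one row can be sketched as a tiny function (a hypothetical helper, not from the original posts): split the row on the event value, scale the two halves by the confidences, and add them back up.

```python
def apply_partial(p_e_h, w, c):
    # Split the row with weight w on the event: the E=1 half carries the
    # weight w*p_e_h and gets scaled by the confidence c, the E=0 half
    # carries w*(1-p_e_h) and gets scaled by 1-c.
    w_match = w * p_e_h * c
    w_miss = w * (1.0 - p_e_h) * (1.0 - c)
    # Collapse the two halves back into one row by adding the weights.
    return w_match + w_miss
```

For the row with P(E1|H1)=0.7 reacting to C(E1)=0.7, the collapsed weight is 0.7*0.7 + 0.3*0.3 = 0.58; for a pure 0/1 table value it reduces to the plain multiplication used earlier.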

Once the table rows get split, the operations on the table stay exactly the same, so they can still be represented with the decision tree. Just the splitting of the rows in the table would be transformed into the splitting of nodes in the decision tree.

In the end the Bayesian logic and the decision trees are equivalent. They do the same things. Traditionally the Bayesian logic uses the training cases that have been averaged out while the decision trees try to match to the individual training cases. But both can be used in either way. And it's possible to get a mix of both by partitioning the weights between the original training cases and their combined averaged versions. It's a smooth scale of 0 to 1: if you transfer all the weight of the training cases into averaging, you get the traditional Bayesian logic, if you transfer none of it, you get the traditional decision trees, and if you transfer say 0.3 of the weight, you get the logic that is a combination of 0.3 of the traditional Bayesian logic and of 0.7 of the traditional decision trees. Note I've been saying "traditional" because they are really equivalent and can be transformed into each other.

Sunday, May 22, 2016

AdaBoost in simpler formulas 2

I've been reading along the book on boosting, and I'm up to about 1/3 of it :-) I've finally realized an important thing about how the H(x) is built.

For easy reference, here is another copy of the AdaBoost algorithm from the previous installment, simplified slightly further by replacing (1-e_t)/e_t with Wgood_t/Wbad_t, and e_t/(1-e_t) with Wbad_t/Wgood_t, as mentioned at the end of it, and getting rid of e_t altogether:

Given: (x_1, y_1), ..., (x_m, y_m) where x_i belongs to X, y_i belongs to {-1, +1}.
Initialize: D_1(i) = 1 for i = 1, ..., m.
For t = 1, ..., T:
  • Train the basic algorithm using the weights D_t.
  • Get weak hypothesis h_t: X -> {-1, +1}.
  • Aim: select h_t to minimize the weighted error Wbad_t/Wgood_t:
    Wgood_t = 0; Wbad_t = 0;
    for i = 1, ..., m {
         if (h_t(x_i) = y_i) {
              Wgood_t += D_t(i);
         } else {
              Wbad_t += D_t(i);
         }
    }
  • Update,
    for i = 1, ..., m {
         if (h_t(x_i) != y_i) {
              D_{t+1}(i) = D_t(i) * sqrt(Wgood_t/Wbad_t);
         } else {
              D_{t+1}(i) = D_t(i) * sqrt(Wbad_t/Wgood_t);
         }
    }
Output the final hypothesis:
H(x) = sign( sum for t=1,...,T ( ln(sqrt(Wgood_t/Wbad_t)) * h_t(x) ) ).

I've been wondering, what's the meaning of ln() in the formula for H(x). Here is what it is:

First of all, let's squeeze everything into under the logarithm. The first step would be to put ht(x) there.

ln(sqrt(Wgood_t/Wbad_t)) * h_t(x) = ln( sqrt(Wgood_t/Wbad_t)^h_t(x) )

This happens by the rule of ln(a)*b = ln(a^b).

Since h_t(x) can be only +1 or -1, raising the value to this power basically means that depending on the result of h_t(x) the value will either be taken as-is or 1 will be divided by it. Which is the exact same thing that happens in the computation of D_{t+1}(i). The two formulas are getting closer.

The next step, let's stick the whole sum under the logarithm using the rule ln(a)+ln(b) = ln(a*b):

H(x) = sign(ln( product for t=1,...,T ( sqrt(Wgood_t/Wbad_t)^h_t(x) ) ))

The expression under the logarithm becomes very similar to the formula for D_{T+1}(i) as traced through all the steps of the algorithm:

D_{T+1}(i) = product for t=1,...,T ( sqrt(Wgood_t/Wbad_t)^(-y_i*h_t(x_i)) )

So yeah, the cuteness of expressing the condition as a power comes in handy. And now the final formula for H(x) makes sense: the terms in it are connected with the terms in the computation of D.

The next question, what is the meaning of the logarithm? Note that its result is fed into the sign function, so the exact value of the logarithm doesn't matter in the end result; what matters is only whether it's positive or negative. The value of a logarithm is positive if its argument is > 1, and negative if it's < 1. So we can get rid of the logarithm and write the computation of H(x) as:

if ( product for t=1,...,T ( sqrt(Wgood_t/Wbad_t)^h_t(x) ) > 1 ) then H(x) = +1 else H(x) = -1

Okay, if it's exactly = 1 then H(x) = 0 but we can as well push it to +1 or -1 in this case. Or we can write that

H(x) = sign( ( product for t=1,...,T ( sqrt(Wgood_t/Wbad_t)^h_t(x) ) ) - 1 )

The next thing, we can pull the square root out of the product:

if ( sqrt( product for t=1,...,T ( (Wgood_t/Wbad_t)^h_t(x) ) ) > 1 ) then H(x) = +1 else H(x) = -1

But since the only operation on its result is the comparison with 1, taking the square root doesn't change the result of this comparison. If the argument of the square root was > 1, the result will still be > 1, and the same for < 1. So we can get rid of the square root altogether:

if ( product for t=1,...,T ( (Wgood_t/Wbad_t)^h_t(x) ) > 1 ) then H(x) = +1 else H(x) = -1

The downside of course is that the computation becomes unlike the one for D_{t+1}(i). Not sure yet if this is important or not.

Either way, we can do one more thing to make the algorithm more readable, we can write the product as a normal loop:

chance = 1;
for t = 1,...,T {
     if (h_t(x) > 0) {
          chance *= Wgood_t/Wbad_t;
     } else {
          chance *= Wbad_t/Wgood_t;
     }
}
H(x) = sign(chance - 1);

Note that this code runs not at the training time but later, at the run time, with the actual input data set x. When the model runs, it computes the actual values ht(x) for the actual x and computes the result H(x).

I've named the variable "chance" for a good reason: it represents the chance that H(x) is positive. The chance can be expressed as a ratio of two numbers A/B. The number A represents the positive "bid", and the number B the negative "bid". The chance and probability are connected and can be expressed through each other:

chance = p / (1-p)
p = chance / (1+chance)

The chance of 1 matches the probability of 0.5. Initially we have no knowledge about the result, so we start with the chance of 1, and with each t the chance gets updated according to the hypothesis picked on that round of boosting.
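The run-time chance bookkeeping can be sketched as runnable code; the per-round ratios Wgood_t/Wbad_t and the weak-hypothesis outputs h_t(x) below are made-up numbers, not from any real training run:

```python
# Hypothetical per-round weight ratios Wgood_t/Wbad_t from training,
# and the weak-hypothesis results h_t(x) for the actual input x.
ratios = [3.0, 2.0, 1.5]
h_outputs = [+1, -1, +1]

chance = 1.0  # the chance of 1 matches the probability of 0.5: no knowledge yet
for r, h in zip(ratios, h_outputs):
    # each round multiplies the chance by the ratio or its inverse
    chance *= r if h > 0 else 1.0 / r

p = chance / (1.0 + chance)        # convert the chance back to a probability
H = 1 if chance > 1.0 else -1      # the final hypothesis H(x)
```

With these numbers the chance comes out as 3.0 * (1/2.0) * 1.5 = 2.25, i.e. a probability of about 0.69, and H(x) = +1.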

The final thing to notice is that in the Bayesian approach we do a very similar thing: we start with the prior probabilities (here there are two possible outcomes, with the probability 0.5 each), and then look at the events and multiply the probability accordingly. At the end we see which hypothesis wins. Thus I get the feeling that there should be a way to express the boosting in the Bayesian terms, for a certain somewhat strange definition of events. Freund and Schapire describe a lot of various ways to express the boosting, so why not one more. I can't quite formulate it yet, it needs more thinking. But the concept of "margins" maps very nicely to the Bayesian approach.

In case you wonder what the margins are: as the rounds of boosting go, after each round H(x) can be computed for each training sample x_i. At some point they all start matching the training results y_i; however, the boosting can be run further, and more rounds can still improve the results on the test data set. This happens because the sign() function in H(x) collapses the details, and the further improvements are not visible on the training data. But if we look at the argument of sign(), such as the result of the logarithm in the original formula, we'll notice that it keeps moving away from the 0 boundary, representing more confidence. This extra confidence then helps make better decisions on the test data. This distance between the result of the logarithm and 0 is called the margin. Well, in the Bayesian systems we also have a boundary probability (in the simplest case of two outcomes, 0.5), and when a Bayesian system has more confidence, it drives the resulting probabilities farther from 0.5 and closer to 1. Very similar.

Saturday, April 9, 2016

Career advice 3

As I've mentioned before, I've read the book "Friend & Foe" by Adam Galinsky and Maurice Schweitzer, and I went to see their talk too. It's an interesting book on the psychology of competition and cooperation. But some examples from it and their interpretation struck me as odd.

In particular, they have a story of Hannah Riley Bowles who got an offer for a tenured position from the Nazareth College, decided to negotiate for some better conditions, and had the offer retracted (to her surprise). Perhaps she'd followed the advice similar to Tarah Van Vleck's: "never accept the first offer". What went wrong? According to Galinsky and Schweitzer, there must be some evil afoot, it's all because she's a woman.

But is it? The whole story as described in the book strikes me as a sequence of bad decisions. I'm not the greatest expert on negotiations by far but I know a thing or two about them. For example, I've negotiated for a year about one of my jobs. For another job, I've negotiated through 3 different engagements over 6 years. And basically in the story about Hannah I see both glaring mistakes on her part and mistaken presuppositions in the narrative.

The major mistake in the narrative is the assumption that you cannot lose by negotiating ("never accept the first offer, it's always lower by 10-15% than the final offer"). It's not true. If you decide not to accept the first offer but start the negotiations, you have to be prepared that the other side may turn around and go away. It has no relation to gender, happens to everyone, and is nothing unusual. It's something you've got to be prepared for. It has happened on multiple occasions in my career.

It doesn't mean that you shouldn't negotiate. The negotiation of the future salary at a new place is the best way and time to raise your income; after that, a 10% raise will be considered a huge one. A 10% raise did happen to me once, but again, it was a very unusual thing, and much easier achieved by negotiating at the hiring time. But you've got to set a realistic goal of what kind of raise would be worth changing jobs, and go from there. If the offer is way below this goal, there is no point in taking it. If it's way above, you've achieved your goal and there's not much more to desire. Extra negotiation can bring extra money but can also break the whole thing. If someone offers you double the current money, it's probably reasonable to just take the offer and not risk it.

And yes, the offer of double money did happen to me but it came with a catch: it was for a 6-month contract, with a long commute and not a particularly exciting job. So it wasn't a no-brainer, it took me some thinking. In the end I decided that if I got double the money for 6 months and then spent another 6 months looking for another job like my current one, I'd still be ahead, so I took it (and as a result things worked out better than expected).

To give an example of when things didn't work out, let's look at that 6-year negotiation story. When I talked to them for the first time, I went there for an interview, told the HR what kind of money I wanted, and a week later I got a form letter saying that they're not interested. Well, overall fine with me (except for one point that I'll return to later), they didn't look particularly interesting anyway. When their recruiter contacted me the next time, I asked him: you people didn't like me once already, why are you calling me again? And he said, no, the technical interviews are actually marked pretty good, so it's got to be some other reason. From which I could only conclude that the money was the problem. And then I tried a slightly different approach to find out what kind of money they had in mind, and it turned out that yes, there was a major disagreement. But the important point for Hannah's story is that they didn't make me an offer for half the money I was asking, they just turned around and went away.

Making another digression, this kind of confirms Tarah Van Vleck's advice "never name your number first". Or does it? Remember, in this case our expectations have been off by a factor of 2. If they made me an offer for half the money I thought reasonable, I wouldn't have taken it anyway, just as I didn't take it when I found it out during the second engagement. By the way, yes, there are disadvantages to naming your number first but there are also other issues, and there are some advantages too: if you overshoot their expectations by a reasonable amount, you'll have a much easier time defending this number in the further negotiations. If they name a number and you say "I want 10% more", they'll figure out that you're just trying to stretch it a little, and they might either stay firm or maybe settle at something like 5% more. If you name a number 20% more than they were expecting to offer, you'll probably get if not all 20% then at least 15%. And it's not just me, I've also read it in some book (Galinsky&Schweitzer's? Cialdini's? Karrass's?) that the first number named sets the tone for the negotiation, which is difficult to move afterwards. It can be moved but not by 15%; if you want to make progress you've got to start with something like "this is laughable! my reasonable estimation is 50% more!" and maybe get an extra 30-45%. And of course bear the risk that the other side would go away, so I'd recommend doing this only if you really do see the initial offer as laughable.

If the other side thinks that your demands are unreasonably high (or low, for the other side, and yes, I've done things like that from my side as well), they'll just go away. But of course from my standpoint the requests have been perfectly reasonable, I would not have agreed to their low-ball offer anyway, so I haven't lost anything. This is a problem only if you're bluffing.

Now turning to Hannah's mistakes. Sorry, but she led the negotiations in a very offensive way, as offensive as it can get without calling the prospective employer names.

The first major mistake was that she responded by writing a letter with a list of requests, and in such a formal tone. Negotiating in written form is bad: it's highly prone to causing very negative feelings in the counterparty. The good way to negotiate is over the phone.

The use of the formal tone is even worse. It's guaranteed to offend. Returning to that example above, receiving that form letter had pissed me off very much. If they simply said "No, we're not interested" or "No, we're not interested, we don't think you're good enough for us", it would have been OK. But receiving a page-long form letter in legalese created a major grudge. For a few years after that I wouldn't even talk to their recruiters.

The right way to negotiate is on the phone, and try to keep as friendly a tone as possible. The point of negotiations is to convince the other party that your viewpoint is more reasonable, not to fight them.

This brings us to the next error, but here Hannah had no control: she had to negotiate directly with the college because the college had contacted her directly. The negotiations go much, much better when conducted through an intermediary. An independent recruiting agent is the best intermediary, the company recruiter is the second best one. Negotiating directly with the hiring manager, as Hannah essentially did, is fraught with peril. The recruiters are professional negotiators: they understand how the negotiations work, and transfer the information between the two parties while maintaining friendliness on both sides. You can talk more bluntly to them, and when the message reaches the other side, it will be formatted in a friendly way. On the other hand, the hiring managers tend to take offense easily. Many of them are technical specialists but not really people persons, and for quite a few of them the feeling of self-importance goes strongly to their heads. It might be even worse in academia than in the industry, at least judging by what I read. The even worse part is that she had to deal with a committee. The problem with committees is that there is a higher probability that at least one member will be a self-important moron who will take offense.

Ironically, this went so bad because from the tone of the letter Hannah doesn't appear to be a people person either, but one with the self-importance gone to her head. It's hard enough to negotiate when one side has this attitude, and much harder when both sides do. For all I understand, the tenure positions are coveted in academia, so when the committee made an offer to Hannah, they likely felt that they were doing her an honor, which is expected to be accepted humbly. Responding to the offer with the words "Granting some of the following provisions will make my decision easier" is the opposite of humility. It's negotiation from a position of power, implying that they've made a humble supplication of her, and she is considering whether to grant their wish. I hope you can see by now how they felt offended.

As you can see, a great many things went wrong with Hannah's negotiation, and none of them had anything to do with her gender. All of them had to do with communication mistakes, the character of the people involved, pride and prejudice of the academic nature, and the lack of an experienced intermediary to calm down the tempers.

What could Hannah have done better? I'd recommend first going there, looking at the place, and meeting the people: a personal contact always makes the following remote communications much more personable. And then making her requests either in a face-to-face meeting or over the phone, in a personable tone of requests, not demands. Like "hey, and how does such a thing run at your college? would it be OK if I do it like this?". Perhaps making some of the requests through the HR department people. And what could the college have done better? After the hiring committee had made the decision, they could have used a professional recruiter from HR to communicate between the committee and Hannah.

Of course, yet another way to look at it is "do you want to work with people like this?". The point of the interview is not only that the candidate is a good fit for the company but also that the company is a good fit for the candidate. If you think that the company behaves unreasonably in response to your reasonable requests, it's probably best not to work there: obviously, your ideas of what is reasonable differ widely.

And this also brings up the point about whether women are undeservedly seen as too aggressive. I'd say that Hannah's example demonstrates exactly this kind of over-aggressiveness. It's not that she tried to negotiate for better conditions, it's HOW she tried to do it. Instead of building a mutual rapport and convincing the counterparty of her goals in a friendly way, she saw it as a fight. It's not the perception of aggression that is the problem, the problem is the aggression that is actually present.

I wonder if it might also be connected to another effect in negotiations. As described in the book "The Negotiating Game" by Karrass, and as I can anecdotally confirm from my experience, when a good negotiator gets the major thing he wants, he goes soft on the opponent and doesn't mind giving up some minor points, to keep the relationship happier. On the other hand, the poor negotiators keep hammering non-stop: even when they've got the negotiating power and have already secured good conditions, they still keep trying to squeeze everything possible out of the opponent. Perhaps the second case is the same character trait that is seen as high aggression, the irony being that the higher aggression brings less success.