This is a copy of the post from my MSDN blog. If you've been reading this blog, you've already seen all the ideas described here. But I've realized that a repost can be useful on this blog for reference, as a short summary of the most interesting parts. So, here it goes.
(P.S. See also a simpler later explanation.)
I recently wrote a series of posts in my other blog on the Bayes expert system. Some time ago I wrote an expert system, and I was surprised by how little information is available on the Internet and how much of it is misleading, so I wanted to write up my experience, and recently I've finally found the time to do it. It turned out larger than I expected, and as I was writing it I gained a deeper understanding of my previous experience and came up with further improvements. The text starts from the very basics of the Bayesian formula and goes through its use in expert systems, the typical pitfalls in building expert systems and the solutions to these pitfalls, the ways to handle the uncertainties, a deeper dive into how and why these formulas actually work, more practical solutions based on that knowledge, and the aspects of testing.
The whole series is here (in the reverse order): http://babkin-cep.blogspot.com/search/label/bayes
The first post is here: http://babkin-cep.blogspot.com/2015/10/bayes-1-basic-terms.html
They are not interspersed with any other posts, so you can read them sequentially.
Right in the first post I refer to Yudkowsky's explanation, so why would you want to read my explanation rather than his? By all means, read his explanation too, it's a good one. But he explains it in a different context: he takes one application of the Bayesian formula and works through it, as do many other texts on artificial intelligence. The expert systems don't apply the formula once. They apply the formula thousands of times to make a single diagnosis, which brings its own problems of scale that I describe. And there are many other practical issues. Your input data might be "impossible" from the standpoint of the probabilities in your knowledge base. The evidence of absence is not the same as the absence of evidence. The events are rarely completely independent. Multiple hypotheses might be valid at the same time. And so on.
The particularly surprising realization for me was that the Bayesian systems and the decision trees are fundamentally the same thing: you can represent either one through the other. They are traditionally used in somewhat different ways, but that's just a question of tuning the parameters, and they can be easily tuned either way or anywhere in between. This stuff is in part 11 of my notes, where it builds on the context from the previous parts, but here is the short version as a spoiler. The short version might not be very comprehensible due to its shortness, but at least it points to what to look for in the long version.
So, it starts with the basic Bayes formula where the probability of some hypothesis H after taking into account the information about the event E is:
P(H|E) = P(H) * P(E|H) / P(E)
The hypotheses can also be thought of as diagnoses, and the events as symptoms.
In a typical expert system there can be hundreds of hypotheses and hundreds of events to consider. The values of P(E|H) are taken from the table that is computed from the training data. The values of P(H) and P(E) change as the events are applied one after another (P(H|E) for one event becomes P(H) for the next event, and P(E) gets computed from the complete set of values for P(H) and P(E|H)) but their initial values are also sourced from the same table. At the end one or multiple most probable hypotheses are chosen as the diagnosis.
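Just to make it concrete, here's a minimal Python sketch of this loop; the two-hypothesis probability table in it is made up for the example:

# A minimal sketch of the chained application of the Bayesian formula.
# The probability table here is made up for the example.

# Prior probabilities of the hypotheses, P(H).
p_h = {"H1": 0.5, "H2": 0.5}

# Conditional probabilities P(E|H), one row per hypothesis.
p_e_given_h = {
    "H1": {"E1": 0.9, "E2": 0.2},
    "H2": {"E1": 0.3, "E2": 0.8},
}

def apply_event(p_h, event):
    # P(E) is computed from the current set of P(H) and P(E|H).
    p_e = sum(p_h[h] * p_e_given_h[h][event] for h in p_h)
    # P(H|E) = P(H) * P(E|H) / P(E), for every hypothesis at once.
    return {h: p_h[h] * p_e_given_h[h][event] / p_e for h in p_h}

# Each event observed as true updates all the hypotheses;
# P(H|E) of this step becomes P(H) of the next step.
for event in ["E1", "E2"]:
    p_h = apply_event(p_h, event)

print(p_h)  # the most probable hypotheses become the diagnosis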
What is the training data? It's the list of the previously diagnosed cases, where both the correct winning hypotheses and the values of the events are known. In the simplest way it can be thought of as a bitmap where each row has a hypothesis name and the bits for every event, showing if it was true or false:
     E1 E2 E3 E4
H1   1  0  0  0
H1   1  0  0  0
H1   0  0  0  1
H2   0  1  1  0
H2   0  1  1  0
In reality one case may have a diagnosis of multiple hypotheses, and the values of the events might not be just 0 and 1 but a number in the range between 0 and 1: we might not be completely confident in some symptom but have, say, a 0.7 confidence that it's true.
How is the table of probabilities built from the training data? For P(H) we take the proportion of the number of cases with this hypothesis to the total number of cases. For P(E|H) we take all the cases for the hypothesis H and average all the values for the event E in them. For P(E) we average all the values for E in all the cases in the whole training data.
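Here's the same computation as a Python sketch over the bitmap above; the representation of a case as a (hypothesis, events) pair is just an assumption for the example:

# Build the probability table from the training cases shown above.
cases = [
    ("H1", {"E1": 1, "E2": 0, "E3": 0, "E4": 0}),
    ("H1", {"E1": 1, "E2": 0, "E3": 0, "E4": 0}),
    ("H1", {"E1": 0, "E2": 0, "E3": 0, "E4": 1}),
    ("H2", {"E1": 0, "E2": 1, "E3": 1, "E4": 0}),
    ("H2", {"E1": 0, "E2": 1, "E3": 1, "E4": 0}),
]
events = ["E1", "E2", "E3", "E4"]
hypotheses = sorted({h for h, _ in cases})
count = {h: sum(1 for hh, _ in cases if hh == h) for h in hypotheses}

# P(H): the share of the cases with this hypothesis among all the cases.
p_h = {h: count[h] / len(cases) for h in hypotheses}

# P(E|H): the average value of the event over the cases of this hypothesis.
p_e_given_h = {
    h: {e: sum(c[e] for hh, c in cases if hh == h) / count[h] for e in events}
    for h in hypotheses
}

# P(E): the average value of the event over all the cases.
p_e = {e: sum(c[e] for _, c in cases) / len(cases) for e in events}

print(p_h)  # {'H1': 0.6, 'H2': 0.4}
print(p_e)  # P(E1) = 0.4, and so on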
As it turns out, this approach doesn't always work so well. Consider the following training data:
     E1 E2
H1   0  0
H1   1  1
H2   1  0
H2   0  1
The meaning of the data is intuitively clear: it's H1 if E1 and E2 are the same, and H2 if E1 and E2 differ. But when we start computing P(E|H), all of them end up at 0.5 (for example, P(E1|H1) = (0+1)/2 = 0.5), and the resulting expert system can't tell anything apart.
There is a solution: for the duration of the computation, split each hypothesis into two, say H1A and H1B:
      E1 E2
H1A   0  0
H1B   1  1
H2A   1  0
H2B   0  1
Before making the final decision, add up the probabilities P(H1)=P(H1A)+P(H1B) and use them for the decision. Now the logic works. The hypotheses can be split down to the point where each case in the training data becomes its own sub-hypothesis; indeed, the current example has done exactly this. If there are multiple cases that are exactly the same, we could keep them together by assigning a weight instead of splitting them into separate sub-hypotheses. For example, if there are 5 cases that are exactly the same, we can make them one sub-hypothesis with the weight of 5.
And with such a fine splitting, the computation of probabilities can be thought of as striking out the sub-hypotheses that don't match the incoming events. If we're computing the diagnosis and receive E1=1, we can throw away H1A and H2B, leaving only H1B and H2A. If then we receive E2=1, we can throw away H2A, and the only hypothesis left, H1B, becomes the final diagnosis that we'll round up to H1.
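Here's this strike-out logic as a small Python sketch:

# The sub-hypotheses, one per training case, as in the split table above.
sub_hypotheses = {
    "H1A": {"E1": 0, "E2": 0},
    "H1B": {"E1": 1, "E2": 1},
    "H2A": {"E1": 1, "E2": 0},
    "H2B": {"E1": 0, "E2": 1},
}

def strike_out(survivors, event, value):
    # Keep only the sub-hypotheses that agree with the observed event.
    return {name: evs for name, evs in survivors.items() if evs[event] == value}

survivors = strike_out(sub_hypotheses, "E1", 1)  # throws away H1A and H2B
survivors = strike_out(survivors, "E2", 1)       # throws away H2A
print(survivors)  # only H1B is left, rounding up to the diagnosis H1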
We're essentially trying to find a match between the current case and one of the training cases. But that's exactly what the decision trees do! We can represent the same table as a decision tree:
          |
          V
      0  E1  1
   +-----+-----+
   |           |
   V           V
0  E2  1    0  E2  1
+--+--+     +--+--+
|     |     |     |
V     V     V     V
H1A   H2B   H2A   H1B
As you can see, it's equivalent: it produces the exact same result on the same input.
But what if we're not entirely confident in the input data? What if we get E1 with the confidence C(E1)=0.7? The way the Bayesian formula
P(H|C(E)) = P(H) * ( C(E)*P(E|H) + (1-C(E))*(1-P(E|H)) ) / ( C(E)*P(E) + (1-C(E))*(1-P(E)) )
treats it amounts to "multiply the weight of the cases where E1=1 by 0.7 and multiply the weight of the cases where E1=0 by 0.3". So in this example we won't throw away the cases H1A and H2B but will multiply their weights by 0.3 (getting 0.3 in the result). H1B and H2A don't escape unscathed either, their weights get multiplied by 0.7. Suppose we then get E2=0.8. Now the weights of H1A and H2A get multiplied by 0.2, and of H1B and H2B get multiplied by 0.8. We end up with the weights:
W(H1A) = 1*0.3*0.2 = 0.06
W(H1B) = 1*0.7*0.8 = 0.56
W(H2A) = 1*0.7*0.2 = 0.14
W(H2B) = 1*0.3*0.8 = 0.24
When we add up the weights to full hypotheses, W(H1)=0.62 and W(H2)=0.38, so H1 wins (although whether it actually wins or we consider the data inconclusive depends on the boundaries we set for the final decision).
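And here's the weighted version of the same sketch, where the strike-out becomes a multiplication of the weights:

# The same matching with partial confidences: instead of striking a case
# out, its weight gets multiplied by C(E) or 1-C(E).
sub_hypotheses = {
    "H1A": {"E1": 0, "E2": 0},
    "H1B": {"E1": 1, "E2": 1},
    "H2A": {"E1": 1, "E2": 0},
    "H2B": {"E1": 0, "E2": 1},
}
weights = {name: 1.0 for name in sub_hypotheses}

def apply_confidence(weights, event, confidence):
    # Multiply the weight by C(E) where the case has a 1 for the event,
    # and by 1-C(E) where it has a 0.
    return {
        name: w * (confidence if sub_hypotheses[name][event] == 1
                   else 1.0 - confidence)
        for name, w in weights.items()
    }

weights = apply_confidence(weights, "E1", 0.7)
weights = apply_confidence(weights, "E2", 0.8)
# Up to the floating-point rounding:
# {'H1A': 0.06, 'H1B': 0.56, 'H2A': 0.14, 'H2B': 0.24}
print(weights)

# Add up the sub-hypotheses into the full hypotheses for the decision.
print(weights["H1A"] + weights["H1B"])  # W(H1) = 0.62
print(weights["H2A"] + weights["H2B"])  # W(H2) = 0.38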
Can this be represented as a decision tree? Sure! It just means that when we make a decision at each event node, we don't choose one way out. We choose BOTH ways out but with different weights. If the weight of some branch comes down to 0, we can stop following it, but we faithfully follow all the other branches until they come down to the leaves, and keep track of the weights along the way.
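In a sketch, the weighted walk over the tree from the diagram above looks like this (the tuple encoding of the tree is an assumption for the example):

# Walk the decision tree following BOTH branches of every event node,
# weighing them by 1-C(E) and C(E). The tuples encode the tree above.
tree = ("E1",                    # (event, branch for 0, branch for 1)
        ("E2", "H1A", "H2B"),    # the E1=0 side
        ("E2", "H2A", "H1B"))    # the E1=1 side

def walk(node, confidences, weight, result):
    if isinstance(node, str):    # a leaf: a sub-hypothesis name
        result[node] = result.get(node, 0.0) + weight
        return result
    event, branch0, branch1 = node
    c = confidences[event]
    if weight * (1.0 - c) > 0.0:  # a branch with the weight 0 can be dropped
        walk(branch0, confidences, weight * (1.0 - c), result)
    if weight * c > 0.0:
        walk(branch1, confidences, weight * c, result)
    return result

# Approximately {'H1A': 0.06, 'H2B': 0.24, 'H2A': 0.14, 'H1B': 0.56},
# the same weights as in the flat computation above.
print(walk(tree, {"E1": 0.7, "E2": 0.8}, 1.0, {}))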
Now, what if the table contains not just 0s and 1s but arbitrary values between 0 and 1? This might be because some training cases had only a partial confidence in their events. Or it might be because we averaged the events of multiple training cases together, building the classic Bayesian table with one line per hypothesis.
We can still compute it with weights. We can just logically split this case into two cases with the partial weights. If we have a value of 0.7 for the hypothesis H and event E, we can split this line into two, one with 1 in this spot and the weight of 0.7, another one with 0 in this spot and the weight of 0.3. And we can keep splitting this case on every event. For example, if we start with
     E1  E2
H1   0.7 0.1
we can first split it into 2 cases on E1:
      E1 E2  W
H1A   1  0.1 0.7
H1B   0  0.1 0.3
And then further split on E2:
       E1 E2 W
H1AA   1  1  0.07
H1AB   1  0  0.63
H1BA   0  1  0.03
H1BB   0  0  0.27
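This full split can be sketched by enumerating all the 0/1 combinations of the events:

# Enumerate the full split of the fractional case on both events: each
# event's value gives the weight of its "1" half, and the weights multiply.
import itertools

case = {"E1": 0.7, "E2": 0.1}  # the H1 row from above
for bits in itertools.product([1, 0], repeat=len(case)):
    weight = 1.0
    for (event, p), bit in zip(case.items(), bits):
        weight *= p if bit == 1 else 1.0 - p
    print(bits, round(weight, 2))
# (1, 1) 0.07 -- H1AA
# (1, 0) 0.63 -- H1AB
# (0, 1) 0.03 -- H1BA
# (0, 0) 0.27 -- H1BB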
Or we can modify the table as we apply the events: right before applying the event, split each row in the table in twain based on the values in it for this event, apply the event by multiplying the weights appropriately, and then collapse the split rows back into one by adding up their weights. This makes the intermediate values more short-lived and saves on their upkeep. And that's exactly the logic used in the classic Bayesian formula.
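Here's a sketch of this step for one event; note how the split, the multiplication, and the collapse combine into a single multiplication per row:

# Apply one event with the confidence C(E) to a table of weighted rows.
# Each row is (weight, {event: value in 0..1}). Splitting the row into
# the E=1 half (weight*p) and the E=0 half (weight*(1-p)), multiplying
# them by C(E) and 1-C(E), and adding them back up is one multiplication.
def apply_event(rows, event, confidence):
    result = []
    for weight, values in rows:
        p = values[event]
        new_weight = weight * (p * confidence + (1.0 - p) * (1.0 - confidence))
        result.append((new_weight, values))
    return result

rows = [(1.0, {"E1": 0.7, "E2": 0.1})]  # the H1 row from the example
rows = apply_event(rows, "E1", 1.0)     # E1 observed as definitely true
print(rows)  # the weight becomes 1.0 * (0.7*1.0 + 0.3*0.0) = 0.7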
Once the table rows get split, the operations on the table stay exactly the same, so they can still be represented with the decision tree. Just the splitting of the rows in the table transforms into the splitting of the nodes in the decision tree.
In the end the Bayesian logic and the decision trees are equivalent: they do the same things. Traditionally the Bayesian logic uses the training cases that have been averaged out, while the decision trees try to match the individual training cases. But both can be used either way. And it's possible to get a mix of both by partitioning the weights between the original training cases and their combined averaged versions. It's a smooth scale from 0 to 1: if you transfer all the weight of the training cases into the averaging, you get the traditional Bayesian logic; if you transfer none of it, you get the traditional decision trees; and if you transfer, say, 0.3 of the weight, you get a combination of 0.3 of the traditional Bayesian logic and 0.7 of the traditional decision trees. Note that I've been saying "traditional" because they are really equivalent and can be transformed into each other.
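As an illustration, here's one possible reading of this weight transfer as a sketch (the value alpha=0.3 and the layout of the rows are my assumptions for the example):

# One possible reading of the weight transfer, for illustration only:
# alpha = 1 gives the classic averaged Bayesian table, alpha = 0 gives
# the pure decision tree, anything in between is a mix.
alpha = 0.3
h1_cases = [{"E1": 0, "E2": 0}, {"E1": 1, "E2": 1}]  # the H1 cases from above

# The averaged row, as in the classic Bayesian table.
averaged = {e: sum(c[e] for c in h1_cases) / len(h1_cases)
            for e in ("E1", "E2")}

rows = [(alpha * len(h1_cases), averaged)]    # the averaged part
rows += [(1.0 - alpha, c) for c in h1_cases]  # the individual cases
print(rows)  # the total weight stays equal to the number of cases

Either way, the rest of the machinery that applies the events to the weighted rows stays exactly the same.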