Let's look at what kind of training data (admittedly, pulled out of thin air) I used to create the table in the last example. As a reminder, the table was:
!,,evA,evB,evC
hyp1,0.66667,1,0.66667,0.66667
hyp2,0.33333,0,0,1
Basically, you start by writing down the known cases. Some of them might have happened in the same way more than once, so instead of writing down the same case multiple times, you just write down the count of cases of that kind.
I've made up the following imaginary cases:
----       evA evB evC
4 * hyp1    1   1   1
2 * hyp1    1   0   0
3 * hyp2    0   0   1
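If you wanted to write these counted cases down in code, it might look like this minimal Python sketch (the list name cases, the tuple layout, and the 0/1 event flags are my own choices for illustration, not anything prescribed by the table above):

    # Each entry: (count, hypothesis, (evA, evB, evC)), events as 0/1 flags.
    cases = [
        (4, "hyp1", (1, 1, 1)),
        (2, "hyp1", (1, 0, 0)),
        (3, "hyp2", (0, 0, 1)),
    ]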
9 cases total, with the hypothesis hyp1 happening with two different combinations of symptoms (4 and 2 cases), and the hypothesis hyp2 happening with one combination of symptoms (3 cases).
The prior probabilities P(H) of the hypotheses are computed by dividing the number of cases with this hypothesis by the total number of cases:
P(hyp1) = (4+2)/(4+2+3) = 6/9 = 0.66667
P(hyp2) = 3/(4+2+3) = 3/9 = 0.33333
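Here is the same computation as a Python sketch, continuing from the cases list above (the names count_by_hyp, total, and prior are mine):

    from collections import defaultdict

    # Sum up the case counts per hypothesis, and the grand total.
    count_by_hyp = defaultdict(int)
    for count, hyp, events in cases:
        count_by_hyp[hyp] += count
    total = sum(count_by_hyp.values())

    # P(H) = cases with this hypothesis / all cases.
    prior = {hyp: n / total for hyp, n in count_by_hyp.items()}
    print(prior)  # {'hyp1': 0.666..., 'hyp2': 0.333...}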
The conditional probabilities of the events P(E|H) are computed by dividing the number of cases where this event is true for this hypothesis by the total number of cases for this hypothesis:
P(evA|hyp1) = (4+2)/(4+2) = 6/6 = 1
P(evB|hyp1) = 4/(4+2) = 4/6 = 0.66667
P(evC|hyp1) = 4/(4+2) = 4/6 = 0.66667
P(evA|hyp2) = 0/3 = 0
P(evB|hyp2) = 0/3 = 0
P(evC|hyp2) = 3/3 = 1
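And a sketch of the conditional probabilities in the same vein, reusing cases, count_by_hyp, and prior from the snippets above; the final loop prints the rows in the same comma-separated layout as the table at the top (again, the variable names are my own):

    # For each hypothesis, count the cases where each event is true.
    event_names = ["evA", "evB", "evC"]
    event_count = defaultdict(lambda: [0, 0, 0])
    for count, hyp, events in cases:
        for i, flag in enumerate(events):
            event_count[hyp][i] += count * flag

    # P(E|H) = cases of H where E is true / all cases of H.
    cond = {hyp: [n / count_by_hyp[hyp] for n in event_count[hyp]]
            for hyp in count_by_hyp}

    # Print in the same format as the table from the last example.
    print("!,," + ",".join(event_names))
    for hyp in sorted(count_by_hyp):
        print(hyp + ",%.5f," % prior[hyp]
              + ",".join("%.5g" % p for p in cond[hyp]))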
These numbers go into the table. Very simple. Of course, with real-sized data sets of many thousands of cases you'd have to do a lot more calculations, but it's fundamentally straightforward, at least for this basic approach. There are more complicated approaches; we'll get to them later.