Sometimes the diagnosis depends on the features of the device being diagnosed. For example, you wouldn't want to diagnose a power steering fluid leak in a car with electric power steering (or no power steering at all).
The typical thing would be to at least ask the make and model of the car up front and enter it as events into the diagnosis. How would it be entered as events? The straightforward way would be to define one event per model. Then when diagnosing a car, the event for its model will be true, and the events for the rest of the models will be false.
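As a rough sketch (in Python, with a made-up model list and made-up event names), this one-event-per-model encoding could look like:

    # Hypothetical sketch: encode the car model as one boolean event per known model.
    KNOWN_MODELS = ["volvo_240", "ford_mustang", "aston_martin_db9"]

    def model_events(model):
        """Return one event per known model, exactly one of them true."""
        return {"model_is_" + m: (m == model) for m in KNOWN_MODELS}

    # For a Volvo 240: model_is_volvo_240 is true, the rest are false.
    print(model_events("volvo_240"))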
This would also mean that every single training case will be model-specific. And even if we fold multiple training cases into a per-hypothesis entry, it will still include the information about which models experienced this failure, and with what prior probability.
In one way, this is good. It means that we will diagnose this hypothesis only for the models where this failure can actually occur. In another way, it's bad. For some rarer models we may not have any training cases showing that yes, this problem can happen for this model. There might be plenty of Volvos in the training data to show all their possible problems but not enough Aston Martins to show any but the most frequent problems. Yet the Aston Martins are mechanically similar to the other cars, and much of the knowledge about the other cars can be applied to the Aston Martins too. So should we take the model into account or not?
It's hard to tell for sure for each particular case. But the approach of specifying the model information as events actually works fairly well together with the confidence capping. The confidence capping means that even if we can't find an exact match in the training data, we'll still keep trying to find the best match, first with a mismatch of one event, then with a mismatch of two events, and so on. Finding a similar training case on another model means a mismatch of two events: the event for the model of that training case will be false instead of true in the input, and the event for the model of the actual car being examined will be true instead of false. So in essence the fully matching training case for the same model will be preferred, but if there isn't one, it will become a choice between an otherwise fully matching case from another model and a partially matching case for this model.
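To make that concrete, here is a small sketch of one way the capping can turn a mismatch into a soft penalty instead of an outright rejection; the cap value, the event names, and the per-mismatch multiplication are assumptions for illustration, not the exact rule:

    # Hypothetical: each mismatched event multiplies the case weight by CAP
    # instead of zeroing it out, so the case with fewer mismatches wins.
    CAP = 0.01

    def case_weight(input_events, case_events):
        weight = 1.0
        for name, value in case_events.items():
            if input_events.get(name) != value:
                weight *= CAP
        return weight

    # Input: an Aston Martin with low power steering fluid.
    inp = {"model_is_aston": True, "model_is_volvo": False, "ps_fluid_low": True}
    # The same symptom recorded on a Volvo: the two model events mismatch.
    volvo_case = {"model_is_aston": False, "model_is_volvo": True, "ps_fluid_low": True}
    # A different symptom recorded on an Aston Martin: one event mismatches.
    aston_case = {"model_is_aston": True, "model_is_volvo": False, "ps_fluid_low": False}

    print(case_weight(inp, volvo_case))  # 0.0001 -- two mismatches
    print(case_weight(inp, aston_case))  # 0.01   -- one mismatch, so it wins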
So far so good, but there are more caveats to keep in mind. This would not work well if we aggressively try to narrow the training cases down to only the relevant symptoms/events. We might end up with a training case for this particular model that has only the model information and one other event marked as relevant. That event will be a mismatch, but it's a mismatch of only one event (since the others are marked as irrelevant), so this case will take priority over the cases from another model with a matching symptom (since those cases will have two mismatches in the model events). A possible workaround is to not fold the model information into the events but to partition the training data into separate per-model tables and a common table without the model information. Then look in the per-model table first, and if nothing useful is found there, look in the common table. This workaround has its own problems, of course, such as: what if there are multiple failures, one showing in the per-model table but the second one only in the common table? The lookup in the common table might never happen, because the first failure would be found in the per-model table and the processing would stop.
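A sketch of that partitioned lookup, under the same made-up conventions as above, might be:

    # Hypothetical: try the model-specific table first, fall back to the common one.
    def diagnose(events, model, per_model_tables, common_table, match):
        results = match(events, per_model_tables.get(model, []))
        if results:
            # Caveat from above: a second, unrelated failure that lives only in
            # the common table will never be reached once this branch matches.
            return results
        return match(events, common_table)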
Another complication is that with a large number of events, the uncertainty introduced by the confidence capping accumulates fast. With a hundred events, and using 0.01 for the cap value, by the end of the processing the result will be completely uncertain. That's why the smaller values of the cap, such as 1e-8, are typically more useful.
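A back-of-the-envelope way to see the scale of this effect, assuming that every processed event shaves off roughly the cap's worth of confidence even on a match (the exact accounting depends on how the cap is applied):

    # Rough illustration of how the capping loss compounds over 100 events,
    # assuming a factor of (1 - cap) per event.
    for cap in (0.01, 1e-8):
        retained = (1.0 - cap) ** 100
        print(f"cap={cap}: confidence retained after 100 events = {retained:.6f}")
    # cap=0.01:  about 0.37 -- most of the certainty is gone
    # cap=1e-08: about 0.999999 -- the loss is negligible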
You might think that if we do the computation directly by the training cases, we preserve the correlation between the events, and so instead of having N events for N models we could encode the model as a binary number represented with only log2(N) events. But that's a bad idea. It means that the code distance between the representations of the models will vary. Instead of always differing by 2 events, two models could differ by as few as 1 and as many as log2(N) events. That will seriously mess up the logic.
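The varying distance is easy to see on a quick example (purely illustrative, with the model index packed into 4 boolean events):

    # Hamming distance between binary-encoded model indices.
    def hamming(a, b):
        return bin(a ^ b).count("1")

    print(hamming(3, 2))   # 1 event differs  (0011 vs 0010)
    print(hamming(3, 4))   # 3 events differ  (0011 vs 0100)
    print(hamming(0, 15))  # 4 events differ  (0000 vs 1111)
    # With one event per model, any two models always differ by exactly 2 events.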
A possible workaround for that is to allow some events to have not just 2 possible values (true or false) but many possible values, as many as hundreds or even thousands. The fuzzy logic can still be used with such events: they would be represented by two values, the confidence value in the range [0..1] and the enumeration value. We'd check whether the enumeration value of the event in the input matches the enumeration value of the event in the training case, and then apply the confidence value appropriately, for or against this training case. It also means that a mismatching value of this event will be a mismatch of only one event, not two, and thus improve the standing of the training cases from other models under the capping logic.
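One possible shape of such an enumerated event, again as an assumption-laden sketch rather than the exact rule:

    # Hypothetical: an enumerated event is a (confidence, value) pair.
    CAP = 0.01

    def enum_event_weight(input_event, case_value):
        confidence, value = input_event          # e.g. (0.99, "ford_mustang")
        if value == case_value:
            return confidence                    # evidence for this training case
        return max(1.0 - confidence, CAP)        # against it, as a single mismatch

    print(enum_event_weight((0.99, "ford_mustang"), "ford_mustang"))  # 0.99
    print(enum_event_weight((0.99, "ford_mustang"), "volvo_240"))     # ~0.01, one mismatch, not two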
A completely different take on this problem is to preprocess the information about the model and split it into separate events that describe the mechanical features of the car. There might be a separate event for the type of power steering (none, hydraulic, or electric), the type of the front brakes (disc or drum), the type of the rear brakes, and so on. The approach with the enumerated multiple-valued events can be used here as well, with for example the power steering type becoming one event with 3 possible values. However, to use this approach effectively, we really need to handle the event relevance properly: the training cases for the problems in each subsystem must mark only the identifying information about that subsystem as relevant. You wouldn't want to fail to recognize a brake problem just because the car in the training case had a different type of power steering than the one being diagnosed. But this kind of relevance might be difficult to discover automatically; it would require the information about the subsystems and their relations to be added by humans.
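A sketch of what that could look like, with hypothetical feature tables and a relevance mark-up on a brake-related training case:

    # Hypothetical: the model expands into mechanical feature events,
    # and each training case marks which events are relevant to it.
    MODEL_FEATURES = {
        "fox_mustang": {"power_steering": "hydraulic", "front_brakes": "disc", "rear_brakes": "drum"},
        "volvo_240":   {"power_steering": "hydraulic", "front_brakes": "disc", "rear_brakes": "disc"},
    }

    brake_case = {
        "events":   {"rear_brakes": "drum", "soft_pedal": True},
        "relevant": {"rear_brakes", "soft_pedal"},   # power steering type is irrelevant here
    }

    def mismatches(input_events, case):
        return sum(1 for name in case["relevant"]
                   if input_events.get(name) != case["events"][name])

    inp = dict(MODEL_FEATURES["fox_mustang"], soft_pedal=True)
    print(mismatches(inp, brake_case))  # 0 -- the power steering type is ignored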
The good part about this splitting is that it would allow diagnosing the cars that have been modified by the owners. If someone installed the Lincoln disc brakes on the rear axle of a Fox-body Ford Mustang that had the drum brakes from the factory, this information can be entered as an override after entering the car model, changing the event for the type of the rear brakes. And the system will still be capable of diagnosing the problems. On the other hand, it might not help much: the aftermarket parts often have their own problems stemming from the mismatch of their characteristics with the original parts. For example, the said Lincoln brake conversion fits mechanically but causes a very wrong proportioning of the brake forces between the front and the rear. On the third hand, a sufficiently smart diagnostic system might still be useful. Even though the misproportioned brake forces do not happen in the cars from the factory, a smart enough system might still be able to point to a faulty master brake cylinder. It wouldn't be faulty as such, but it would be faulty in its function for the modified brakes, distributing the brake forces in the wrong proportion.
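In terms of the sketches above, the override mentioned earlier is just a change applied on top of the model-derived features (hypothetical names again):

    # Start from the factory features of the model, then apply the owner's modification.
    inp = {"power_steering": "hydraulic", "front_brakes": "disc", "rear_brakes": "drum"}
    inp["rear_brakes"] = "disc"   # the Lincoln disc brake conversion on the rear axle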