Well...
In causal logic, you work with sentences like (an example from the paper) "pneumonia causes chest pain." This is different from the plain logical 'implies' relationship for two reasons: causation usually means the state evolves over time ("pneumonia implies chest pain" would have pneumonia and chest pain holding at the same moment), and it implies that the chest pain wasn't there before the pneumonia (first-order logic makes no such assumption).
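To make the difference concrete, here's a rough Python sketch of the two readings (my own toy encoding, not anything from the paper): 'implies' is a constraint checked within a single time step, while 'causes' is a transition from one time step to the next.

```python
# "pneumonia implies chest pain": a constraint on a single state.
def implies_holds(state):
    return (not state["pneumonia"]) or state["chest_pain"]

# "pneumonia causes chest pain": a transition; the effect shows up
# in the *next* state, and wasn't assumed to be there before.
def causes_step(state):
    nxt = dict(state)
    if state["pneumonia"]:
        nxt["chest_pain"] = True
    return nxt

before = {"pneumonia": True, "chest_pain": False}
after = causes_step(before)  # the chest pain appears over time
print(implies_holds(before), implies_holds(after))  # False True
```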
In probabilistic logic, you think about sentences like "pneumonia might cause chest pain." It's similar to the causes relationship, but you also attach a probability to each one. Typical stuff you'll have is "with probability 0.7, the operation cures the patient, and with probability 0.01, the patient dies." The remaining 0.29 is a "don't care" scenario (for example, the patient is neither cured nor dead).
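A minimal sketch of how one such rule could be sampled (names and numbers taken from the example above; the encoding itself is my own invention, not the paper's):

```python
import random

def sample_outcome(outcomes):
    """Pick one of a rule's mutually exclusive outcomes; the
    leftover probability mass is the "don't care" case."""
    r = random.random()
    total = 0.0
    for outcome, p in outcomes:
        total += p
        if r < total:
            return outcome
    return None  # the remaining 0.29: neither cured nor dead

# "with probability 0.7 the operation cures the patient,
#  with probability 0.01 the patient dies"
operation = [("cured", 0.7), ("dies", 0.01)]
print(sample_outcome(operation))
```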
You can then combine lots of these relationships into a complex network (a causes b and c, c causes d, d and b cause e or f, ...) to see what the end results could be, and how likely each one is.
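As a toy illustration of such a network (the rule structure and all numbers are invented here), one way to estimate the probabilities of the end results is to just sample lots of possible worlds:

```python
import random
from collections import Counter

# Hypothetical network, roughly the one sketched above: each rule is
# (preconditions, mutually exclusive (effect, probability) outcomes).
RULES = [
    ({"a"}, [("b", 0.8)]),
    ({"a"}, [("c", 0.5)]),
    ({"c"}, [("d", 0.6)]),
    ({"b", "d"}, [("e", 0.3), ("f", 0.4)]),  # leftover 0.3: neither
]

def sample_outcome(outcomes):
    r = random.random()
    total = 0.0
    for effect, p in outcomes:
        total += p
        if r < total:
            return effect
    return None  # the "don't care" case

def simulate():
    state = {"a"}  # start from the root cause
    for causes, outcomes in RULES:  # rules listed in causal order
        if causes <= state:
            effect = sample_outcome(outcomes)
            if effect is not None:
                state.add(effect)
    return frozenset(state)

counts = Counter(simulate() for _ in range(100_000))
for world, n in counts.most_common():
    print(sorted(world), round(n / 100_000, 3))
```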
Of course, in real life you don't know the probability of each relationship. So the system takes a set of examples (patient A had pneumonia and chest pain, and didn't survive the operation; patient B...) and tries to guess ('learn') the probabilities from them. Afterwards, the system can be used to answer queries about new examples (patient Z has chest pain; what are the chances she has pneumonia and will survive the operation?).
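A crude sketch of the learning step, reduced to simple counting plus Bayes' rule (the patient data is invented, and real systems handle partially observed data with something more sophisticated, e.g. EM-style algorithms):

```python
# Toy patient records: (had_pneumonia, had_chest_pain).
patients = [
    (True, True), (True, True), (True, False),
    (False, True), (False, False), (False, False),
]

# "Learn" P(chest_pain | pneumonia) by counting: the simplest
# maximum-likelihood estimate.
with_pneumonia = [cp for pn, cp in patients if pn]
p_pain_given_pneumonia = sum(with_pneumonia) / len(with_pneumonia)

# Querying the other direction needs Bayes' rule:
# P(pneumonia | pain) = P(pain | pneumonia) * P(pneumonia) / P(pain)
p_pneumonia = sum(pn for pn, _ in patients) / len(patients)
p_pain = sum(cp for _, cp in patients) / len(patients)
p_pneumonia_given_pain = p_pain_given_pneumonia * p_pneumonia / p_pain

print(f"P(chest pain | pneumonia) = {p_pain_given_pneumonia:.2f}")
print(f"P(pneumonia | chest pain) = {p_pneumonia_given_pain:.2f}")
```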
The causation part seems pretty unique to this system. I'm not yet sure what makes it stand out against the 100s of other probabilistic logic learning systems, but I suppose I'll find out after reading more.