Abduction and Abductive Logic Programming
Thinking as Computation, 11
Computational Logic and Human Thinking, 10, Appendix A6
Abduction is defeasible reasoning from effects to causes, e.g., from symptoms to disease. The idea is that given an observation (and a background theory), abduction is reasoning to an explanation. What counts as an explanation? That is a hard question to answer in detail, but the rough idea is that explanations provide information about causes.
(Abduction is defeasible reasoning because it can lead from true observations and other beliefs to false hypotheses (explanations).)
Abduction is important for modeling the intelligence of a rational agent. The explanations of observations may trigger maintenance goals that the observations themselves do not trigger.
Suppose that the agent makes the observation that the grass is wet. The problem is to explain the observation. There are many possible explanations, but in this part of the world (Tempe, Arizona) the most likely alternatives are either that it rained or that the sprinkler was on. How does the agent reason to these explanations?
One way to find these explanations is by reasoning backwards from the observation (treated as a goal) with beliefs about causal connections represented in the form
effect if cause
(Treating observations as goals extends the notion of goal. What justifies this? The extension exploits the fact that the two kinds of reasoning, finding actions to achieve a goal and finding hypotheses to explain an observation, can both be viewed as special cases of the reasoning used to solve the more abstract problem of finding assumptions to derive conclusions.)
Why is the grass wet?
Suppose that the beliefs about the causal connections are
the grass is wet if it rained.
the grass is wet if the sprinkler was on.
In the KB, the grass is wet contains a "closed" predicate. This predicate has a (partial) definition: the grass is wet if it rained or if the sprinkler was on.
It rained and the sprinkler was on contain "open" predicates. They have no definitions because they do not occur in the heads of clauses.
Open predicates provide the possible hypotheses for abduction.
In the example, backward reasoning from the observation that the grass is wet results in two possible explanations: either it rained or the sprinkler was on.
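This backward reasoning can be sketched in code. The following is a minimal illustration, not part of the source text: clauses are represented as (head, body) pairs, a predicate is "open" when it never appears as the head of a clause, and an explanation is a set of open-predicate assumptions from which the observation follows. All names are illustrative.

```python
# Clauses of the form "effect if cause", represented as (head, [body...]).
clauses = [
    ("grass_is_wet", ["it_rained"]),
    ("grass_is_wet", ["sprinkler_was_on"]),
]

# A predicate is "closed" if it occurs as the head of some clause.
heads = {head for head, _ in clauses}

def explain(goal):
    """Reason backwards from goal, returning every set of
    open-predicate hypotheses that would derive it."""
    if goal not in heads:
        # Open predicate: no definition, so assume it as a hypothesis.
        return [{goal}]
    explanations = []
    for head, body in clauses:
        if head == goal:
            # Combine explanations of all subgoals in the clause body.
            combos = [set()]
            for subgoal in body:
                combos = [c | e for c in combos for e in explain(subgoal)]
            explanations.extend(combos)
    return explanations

print(explain("grass_is_wet"))  # [{'it_rained'}, {'sprinkler_was_on'}]
```

Backward reasoning from the observation, treated as a goal, yields exactly the two alternative explanations discussed above.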
The problem now is to decide which is the best explanation. In general, this is difficult to do.
One way to help decide between the explanations is to use forward reasoning. Forward reasoning from alternative explanations can sometimes derive additional consequences that can be confirmed by past or future observations. The greater the number of such additional observations a hypothesis explains, the better the explanation.
For example, the agent might think that if it rained last night, then there will be drops of water on the living room skylight. The agent then may observe that in fact there are drops of water on the skylight. In this case, the agent can think that it is likely that the grass is wet because it rained last night. This explanation is the more likely one because the assumption that it rained explains two independent observations, compared with the assumption that the sprinkler was on, which explains only one.
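The preference for the hypothesis that explains more observations can also be sketched in code. This is an illustrative fragment under assumed names: each candidate hypothesis is forward-chained to its consequences, and hypotheses are scored by how many of the agent's observations those consequences cover.

```python
# Rules of the form "effect if cause", as (head, [body...]).
rules = [
    ("grass_is_wet", ["it_rained"]),
    ("grass_is_wet", ["sprinkler_was_on"]),
    ("drops_on_skylight", ["it_rained"]),
]

def consequences(hypothesis):
    """Reason forwards from a single hypothesis to all derivable facts."""
    facts = {hypothesis}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

observations = {"grass_is_wet", "drops_on_skylight"}

def score(hypothesis):
    """Number of independent observations the hypothesis explains."""
    return len(consequences(hypothesis) & observations)

print(score("it_rained"))         # explains both observations
print(score("sprinkler_was_on"))  # explains only the wet grass
```

Here it rained scores higher than the sprinkler was on, matching the reasoning in the text: the better explanation is the one whose consequences cover more of what has been observed.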
Another way to decide between possible explanations is in terms of the consistency of the explanation with observations. In the "grass is wet" example, there are two possible explanations of the observation that the grass is wet: it might be that it rained or that the sprinkler was on. Suppose, however, that the agent observes that there are clothes outside on the line and that the clothes are dry. The hypothesis that it rained does not explain why the clothes are dry. In fact, the hypothesis is inconsistent with this observation. This inconsistency eliminates it rained as an explanation.
One way to incorporate the consistency requirement on explanations is with the use of an integrity constraint. Integrity constraints function to rule out inconsistent explanations.
Integrity constraints work like prohibitions. In the runaway trolley example (in the last lecture), the agent reasons forward from the items in the plan to consequences to determine if any of these consequences trigger a prohibition. If they do, the agent abandons the plan because the prohibition makes it impossible both to accept the plan and to not do something wrong. In the "grass is wet" example, the agent reasons forward from possible explanations to consequences of those explanations. If any of these consequences are inconsistent with what the agent knows, the agent abandons the explanation because the integrity constraint makes it impossible both to accept the explanation and maintain the integrity of its knowledge base.
In the "grass is wet" example, let the integrity constraint be
if a thing is dry and the thing is wet, then false.
Now suppose the beliefs are
the clothes outside are dry.
the clothes outside are wet if it rained.
Suppose the hypothesis is
it rained.
Forward reasoning yields
the clothes outside are wet
Forward reasoning with this consequence and the constraint yields
if the clothes outside are dry, then false
Finally, more forward reasoning yields
false.
The derivation of false eliminates the hypothesis that it rained as a candidate explanation of the observation that the grass is wet.
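The consistency check can be sketched as follows. This is an illustrative fragment under assumed names: the integrity constraint is modeled as a clause whose head is false, and a hypothesis is rejected when forward reasoning from it, together with the observations, derives false.

```python
# Beliefs and the integrity constraint, as (head, [body...]) clauses.
rules = [
    ("grass_is_wet", ["it_rained"]),
    ("grass_is_wet", ["sprinkler_was_on"]),
    ("clothes_are_wet", ["it_rained"]),
    # Integrity constraint: if a thing is dry and wet, then false.
    ("false", ["clothes_are_dry", "clothes_are_wet"]),
]

def forward_close(facts):
    """Reason forwards to the closure of facts under the rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

observations = {"clothes_are_dry"}

def consistent(hypothesis):
    """A hypothesis survives only if it does not let us derive false."""
    return "false" not in forward_close(observations | {hypothesis})

print(consistent("it_rained"))         # eliminated by the constraint
print(consistent("sprinkler_was_on"))  # survives
```

With the observation that the clothes are dry, the hypothesis it rained derives false and is eliminated, while the sprinkler was on remains a candidate explanation, exactly as in the derivation above.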