Abduction and Abductive Logic Programming

Thinking as Computation, 11
Computational Logic and Human Thinking, 10, Appendix A6




Abduction is reasoning from effects to causes, e.g., from symptoms to disease. The idea is that given an observation (and a background theory), abduction is reasoning to an explanation. What counts as an explanation? That is a hard question to answer in detail, but the rough idea is that explanations provide information about causes.

Abduction is an instance of defeasible reasoning. It can lead from true observations and other beliefs to false hypotheses (explanations), but it is important for modeling the intelligence of a rational agent. The explanations of observations may trigger maintenance goals that the observations themselves do not trigger.

Suppose that the agent makes the observation that the grass is wet. The problem is to explain the observation. There are many possible explanations, but in this part of the world (Tempe, Arizona) the most likely alternatives are either that it rained or that the sprinkler was on. How does the agent reason to these explanations?

One way to find these explanations is by reasoning backwards from the observation (treated as a goal) with beliefs about causal connections represented in the form

effect if cause

(Note that treating observations as goals extends the notion of a goal beyond representing the world as the agent would like it to be in the future, to explaining the world as the agent actually sees it. What justifies this extension? It exploits the fact that the two kinds of reasoning, finding actions to achieve a goal and finding hypotheses to explain an observation, can both be viewed as special cases of the reasoning used to solve the more abstract problem of finding assumptions from which to derive conclusions.)

Suppose that the beliefs about the causal connections are

the grass is wet if it rained.
the grass is wet if the sprinkler was on.

In the knowledge base, the grass is wet contains a "closed" predicate: it has a (partial) definition, since the grass is wet if it rained or the sprinkler was on. By contrast, it rained and the sprinkler was on contain "open" predicates: they have no definition, because they do not occur in the head of any clause. This is what makes it rained and the sprinkler was on the candidate hypotheses for abduction. Backward reasoning from the observation that the grass is wet then yields two possible explanations: either it rained or the sprinkler was on.
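
To make the backward-reasoning step concrete, here is a minimal sketch in Prolog. The representation is an assumption made for illustration, not part of the source model: abducible/1 marks the open predicates, and the small meta-interpreter solve/2 reasons backward from a goal, collecting whatever abducible assumptions it needs along the way.

% Beliefs of the form  effect if cause.
:- dynamic wet/1.                 % so clause/2 can inspect the clauses portably
wet(grass) :- rained.
wet(grass) :- sprinkler_was_on.

% Open predicates: nothing defines them, so they may be assumed (abduced).
abducible(rained).
abducible(sprinkler_was_on).

% solve(Goal, Hypotheses): derive Goal, collecting the abducibles assumed.
solve(true, []) :- !.
solve((A, B), Hs) :- !, solve(A, HA), solve(B, HB), append(HA, HB, Hs).
solve(G, [G]) :- abducible(G).
solve(G, Hs) :- \+ abducible(G), clause(G, Body), solve(Body, Hs).

% ?- solve(wet(grass), Explanation).
% Explanation = [rained] ;
% Explanation = [sprinkler_was_on].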


The problem now is to decide which is the best explanation. In general, this is nontrivial.

One way to help decide between the explanations is to use forward reasoning. Forward reasoning from alternative explanations can sometimes derive additional consequences that are confirmed by past or future observations. The more such confirmed observations a hypothesis explains, the better the explanation.

For example, the agent might think that if it rained last night, then there will be drops of water on the living room skylight. The agent then may observe that in fact there are drops of water on the skylight. In this case, the agent can think that it is likely that the grass is wet because it rained last night. This explanation is the more likely one because the assumption that it rained explains two independent observations, compared with the assumption that the sprinkler was on, which explains only one.
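
A rough way to picture this preference, again as an illustrative sketch rather than anything from the source: temporarily adopt a hypothesis and count how many of the recorded observations then become derivable. The predicate names observed/1 and explains_count/2 are assumptions introduced for this example.

% Causal beliefs and the observations actually made.
:- dynamic rained/0, sprinkler_was_on/0.
wet(grass)         :- rained.
wet(grass)         :- sprinkler_was_on.
drops_on(skylight) :- rained.

observed(wet(grass)).
observed(drops_on(skylight)).

% explains_count(+Hypothesis, -Count): assume the hypothesis, count the
% observations it accounts for, then withdraw the assumption.
explains_count(Hypothesis, Count) :-
    assertz(Hypothesis),
    findall(O, (observed(O), once(O)), Explained),
    retract(Hypothesis),
    length(Explained, Count).

% ?- explains_count(rained, N).            gives N = 2
% ?- explains_count(sprinkler_was_on, N).  gives N = 1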

Another way to decide between possible explanations is in terms of the consistency of the explanation with observations. In the "grass is wet" example, there are two possible explanations of the grass is wet. It might be that it rained or that the sprinkler was on. Suppose, however, that the agent observes that there are clothes outside on the line and that the clothes are dry. The hypothesis that it rained does not explain why the clothes are dry. In fact, the hypothesis is inconsistent with this observation. This inconsistency eliminates it rained as an explanation.

One way to incorporate the consistency requirement on explanations is with the use of an integrity constraint. Integrity constraints function to rule out inconsistent explanations.

In the "grass is wet" example, let the integrity constraint be

if a thing is dry and the thing is wet, then false.

Now suppose the beliefs are

the clothes outside are dry.
the clothes outside are wet if it rained.

Suppose the hypothesis is

it rained.

Forward reasoning yields

the clothes outside are wet

Forward reasoning with this consequence and the constraint yields

if the clothes outside are dry, then false

Finally, more forward reasoning yields

false

The derivation of false eliminates the hypothesis that it rained as a candidate explanation of the observation that the grass is wet.
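
The same elimination step can be sketched in Prolog, again under assumed names that are not from the source: violated/0 stands in for the derivation of false, and consistent/1 adopts a hypothesis provisionally and rejects it if the integrity constraint then fires.

% Beliefs, the observation about the clothes, and the integrity constraint.
:- dynamic rained/0, sprinkler_was_on/0.
wet(grass)   :- rained.
wet(grass)   :- sprinkler_was_on.
wet(clothes) :- rained.

dry(clothes).

% "If a thing is dry and the thing is wet, then false."
violated :- dry(Thing), wet(Thing).

% consistent(+Hypothesis): succeeds only if assuming the hypothesis does not
% allow false (here, violated) to be derived.
consistent(Hypothesis) :-
    assertz(Hypothesis),
    ( violated -> Ok = no ; Ok = yes ),
    retract(Hypothesis),
    Ok == yes.

% ?- consistent(rained).            fails: it rained is eliminated.
% ?- consistent(sprinkler_was_on).  succeeds.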



There are several points to notice about this example.

One important point is the similarity between the ways prohibitions and integrity constraints work. In the logic programming/agent model, the agent reasons forward from the items in a plan to their consequences to determine whether any of these consequences trigger a prohibition. If they do, the agent abandons the plan, because the prohibition makes it impossible both to accept the plan and to avoid doing something wrong. In the "grass is wet" example, the agent reasons forward from possible explanations to consequences of those explanations. If any of these consequences are inconsistent with what the agent knows, the agent abandons the explanation, because the integrity constraint makes it impossible both to accept the explanation and to maintain the integrity of its knowledge base.

Another point to notice is that abduction may help with instances of defeasible reasoning that cannot be easily explained in terms of negation-as-failure. Recall the example "looks red." In the example, the subject forms the belief that some object is red because it looks red. It may be that the reasoning in this example is an instance of abduction, although it is not completely clear how this abduction would work in the logic programming/agent model. The beliefs about causal connections might be

it looks red if it is red.
it looks red if it is white and has red lights shining on it.

This would lead to two possible explanations. The second possibility needs to be ruled out somehow before the agent gets new information, but it does not seem to be ruled out by a violation of integrity. Instead, it seems to be the less plausible of the two possible explanations in some other way.


Finally, as with every example, it is important to note the ways in which the algorithm in the logic programming/agent model is not completely determinate.





