How to Form Beliefs in Defeasible Reasoning

In an environment of real-world complexity, intelligent agents must be able to form reasonable beliefs and make rational decisions against a background of pervasive ignorance. Given this situation, reasoning cannot be confined to deductively valid inferences. Intelligent agents must reason defeasibly: they need to draw conclusions that are made reasonable by the evidence, and they must be prepared to withdraw those conclusions and to draw new ones in the face of new evidence.

In deductive reasoning, the inference schemes employed are deductive rules. What distinguishes deductive reasoning from reasoning more generally is that deductive reasoning is not defeasible. More precisely, given a deductive argument for a conclusion, one cannot rationally deny the conclusion without denying one or more of the premises.

Information that can mandate the retraction of the conclusion of a defeasible argument constitutes a defeater for the argument.

There are two kinds of defeaters. The simplest kind is a rebutting defeater, which attacks an argument by attacking its conclusion: it provides a reason for thinking the conclusion is false. The other kind is an undercutting defeater, which attacks the inference itself without providing a reason for thinking the conclusion is false.

An Example of Defeasible Reasoning

An example of defeasible reasoning helps to illustrate the idea. Here is one that John Pollock (who did some of the fundamental work on the subject) gives in his work on defeasible reasoning.

In the first argument, the agent observes a number of swans that are white. This gives him a defeasible reason for thinking that all swans are white.

In the second argument, an ornithologist (Herbert) informs the agent that not all swans are white. People do not always speak truly, so the fact that he says that not all swans are white does not entail that it is true that not all swans are white. Nevertheless, because Herbert is an ornithologist, the information gives the agent a defeasible reason for thinking that not all swans are white. This reason is a rebutting defeater. It is indicated by the heavy line. The line has double arrows because rebutting defeaters are symmetrical.

In the third argument, Simon (whom the agent regards as very reliable) says that Herbert is incompetent. This gives the agent a reason that defeats the argument for the conclusion that not all swans are white, but it is not a reason for thinking that it is false that not all swans are white. Even if Herbert is incompetent, he might have accidentally gotten it right that not all swans are white. Thus Simon’s remarks constitute a defeater, but not a rebutting defeater. They constitute an undercutting defeater.

The Problem of What to Believe

Given the inference graph, what should the agent believe?

Suppose that the agent reasoned only deductively, and so only in terms of conclusive reasons. If the premises are somehow initially justified, then it is easy to provide an account of what the agent should believe: he should believe all and only the conclusions of his arguments. The problem arises because the agent reasons defeasibly. In this case, he should not believe the conclusions of all of the arguments because the conclusions of some of his arguments may be defeaters for other arguments.

(Notice that defeasible reasoning enforces a distinction between beliefs and conclusions. When an agent constructs an argument, he entertains the conclusion and he entertains the propositions comprising the intervening steps, but he need not believe them. Constructing arguments is one thing. Deciding which conclusions to accept is another.)

What is needed is a criterion which, when applied to the inference graph, determines which conclusions are defeated and which are not. This would be a criterion that determines the defeat-statuses of the conclusions. The conclusions that ought to be believed are those that are undefeated. Such a criterion is called a semantics for defeasible reasoning.

Argumentation Semantics

To begin to understand what a semantics for defeasible reasoning is, it is helpful to abstract from the internal structure of arguments.

What remains in the argument when the internal structure is abstracted away is an argumentation framework. An argumentation framework is a set of (abstract) arguments and a binary defeat relation between these arguments. More formally, it is a pair <Arg, Def>, where Arg is a set of arguments and Def is a subset of Arg x Arg.

Argument A defeats argument B just in case <A, B> is in the set Def.

An example makes it a little clearer how an argumentation semantics works.

Suppose there are three arguments, A, B, and C. Suppose that C defeats B and that B defeats A. These relationships among the arguments may be pictured as follows:

     A <----  B, B <------ C

What should be believed? Given the defeat relationships, it is reasonable to have confidence in arguments A and C but not in argument B. (Here is the explanation. B defeats A, but C defeats B. In this way, C reinstates A. Further, no argument defeats C. So arguments A and C are undefeated.)
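A framework like this can be written down directly as data. Here is a minimal sketch in Python (the representation is an illustrative choice, not a standard library):

```python
# The argumentation framework <Arg, Def> as plain Python data.
Arg = {"A", "B", "C"}
Def = {("B", "A"), ("C", "B")}  # B defeats A; C defeats B

def defeats(x, y):
    """x defeats y just in case <x, y> is in Def."""
    return (x, y) in Def
```

Here `defeats("B", "A")` holds and `defeats("A", "B")` does not, mirroring the picture above.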

A procedure that takes an argumentation framework and determines which of the arguments can and cannot be accepted is an argumentation semantics.

Labeling Arguments In, Out, or Undecided

One way to provide a semantics is in terms of labeling arguments in, out, or undecided. Here are the conditions:

An argument is in iff all its defeaters are out.

An argument is out iff it has at least one defeater that is in.

An argument is undecided iff it is neither in nor out.
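These conditions can be turned into a small fixpoint computation. The sketch below (in Python, with illustrative names) starts with every argument undecided and repeatedly applies the in and out rules until nothing changes; the result is the minimal labeling satisfying the conditions, the so-called grounded labeling:

```python
def grounded_labeling(args, defeat):
    """Label arguments in, out, or undecided: an argument becomes 'in'
    when all its defeaters are 'out', and 'out' when at least one of
    its defeaters is 'in'; whatever remains is 'undecided'."""
    label = {a: "undecided" for a in args}
    changed = True
    while changed:
        changed = False
        for a in args:
            if label[a] != "undecided":
                continue
            defeaters = [b for b in args if (b, a) in defeat]
            if all(label[b] == "out" for b in defeaters):
                label[a] = "in"   # vacuously true if a has no defeaters
                changed = True
            elif any(label[b] == "in" for b in defeaters):
                label[a] = "out"
                changed = True
    return label
```

On the framework above, `grounded_labeling({"A", "B", "C"}, {("B", "A"), ("C", "B")})` labels C in (no defeaters), then B out, then A in.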

The Example Revisited

Consider the example pictured above. In this example,

     Arg = { A,B,C}, Def = {<B, A>, <C,B>}

     A <----  B,  B <------ C

C is in because all its defeaters are out. (It is trivially true that all of C's defeaters are out because C does not have any defeaters.)

C defeats B, and C is in. Therefore, B is out.

Finally, A is in because all of its defeaters are out.

The Nixon Diamond

Here is another example, the so-called "Nixon Diamond." There are two arguments:

Argument A: Nixon is a pacifist because he is a Quaker.

Argument B: Nixon is not a pacifist because he is a Republican.

(Richard Nixon was the 37th President. As a boy, he went to Quaker meetings and played the piano at services. In politics, he was a hawk on the war in Vietnam. He won reelection running against Democratic Senator George McGovern of South Dakota, an outspoken opponent of the war.)

This example yields an argumentation framework <Arg, Def> where Arg = {A, B} and Def = {<A, B>, <B, A>}. This is called the Nixon "Diamond" because it was originally depicted this way

     [Figure: the Nixon Diamond]

Given the description of the arguments A and B, it may be depicted this way

      A <---- B, B <----- A

Given the framework, there are three possible complete labelings:

1. A = in, B = out

2. A = out, B = in

3. A = undecided, B = undecided

Which labeling is correct? It seems to be the third. It is not rational to conclude either that Nixon is a pacifist or that he is not a pacifist.

Another Example

Here is another example: Arg = {A, B, C}, Def = {<B,A>, <C, B>, <A, C>}. It may be depicted this way

     A <----- B, B <------ C, C <----- A

In this framework, A = undecided, B = undecided, and C = undecided.

Complete Extensions

Another (slightly more formal) approach to argumentation semantics is in terms of what are called complete extensions.

In this approach,

A set of arguments is conflict-free iff it does not contain any arguments A and B such that A defeats B.

A set of arguments defends an argument C iff each defeater of C is defeated by an argument in the set.


Define the function F: 2^Arg → 2^Arg such that F(Args) = {A | Args defends A}, where Args ⊆ Arg.

Given these definitions,

Args is a complete extension iff Args = F(Args).
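Because Arg is finite in these examples, the definition can be checked by brute force: enumerate every subset of Arg and keep those that are conflict-free and equal to F of themselves. A Python sketch (illustrative, not optimized):

```python
from itertools import combinations

def complete_extensions(args, defeat):
    """Enumerate the complete extensions of <args, defeat>: the
    conflict-free sets S with S = F(S), where F(S) is the set of
    arguments defended by S."""
    args = list(args)

    def conflict_free(S):
        return not any((a, b) in defeat for a in S for b in S)

    def F(S):
        # a is defended by S iff every defeater b of a is itself
        # defeated by some member of S.
        return {a for a in args
                if all(any((d, b) in defeat for d in S)
                       for b in args if (b, a) in defeat)}

    exts = []
    for r in range(len(args) + 1):
        for combo in combinations(args, r):
            S = set(combo)
            if conflict_free(S) and F(S) == S:
                exts.append(S)
    return exts
```

The conflict-free check matters: in the Nixon Diamond, {A, B} satisfies F({A, B}) = {A, B} but is not conflict-free, so it is not a complete extension.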

To understand what a complete extension is, it helps to consider the examples. First, consider the framework

     Arg = {A,B,C}, Def = {<B, A>, <C, B>}

     A <----  B,  B <------ C

In this framework, there is just one complete extension: {A,C}. It is a complete extension since it is conflict-free and defends exactly itself. Next, consider the Nixon Diamond:

      Arg = {A,B}, Def = {<A,B>, <B,A>} 

      A <---- B,  B <----- A

In this framework, there are three complete extensions: { }, {A}, and {B}. (The empty set is a complete extension because it is conflict-free and defends nothing: neither A nor B has all of its defeaters defeated by { }.) Finally, consider the remaining example framework:

     Arg = {A, B, C}, Def = {<B,A>, <C,B>, <A,C>} 
     A <----- B, B <------ C, C <----- A

In this framework, there is just one complete extension: { }. (No singleton defends its member: {A}, for example, does not defend A, since A’s defeater B is not defeated by A.)

When there is just one complete extension, it is the set of in arguments. When there is more than one complete extension, the minimal complete extension is the set of in arguments. The minimal complete extension is called the grounded extension. Every argumentation framework has exactly one grounded extension.
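The grounded extension can also be computed directly, without enumerating subsets, as the least fixpoint of F: start from the empty set and apply F until nothing changes. A self-contained Python sketch (names are illustrative):

```python
def grounded_extension(args, defeat):
    """Compute the grounded extension of <args, defeat> as the least
    fixpoint of F, iterating F from the empty set."""
    def F(S):
        # Arguments all of whose defeaters are defeated by S.
        return {a for a in args
                if all(any((d, b) in defeat for d in S)
                       for b in args if (b, a) in defeat)}
    S = set()
    while True:
        nxt = F(S)
        if nxt == S:
            return S
        S = nxt
```

For the first example the iteration runs { } → {C} → {A, C} → {A, C}; for the Nixon Diamond and the three-cycle it stops immediately at { }.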

Statistical Syllogism

Suppose that a person reads something in the newspaper. What makes it reasonable to believe that the report is true? Not every such report is true, but reports published in certain news sources are likely to be true. This fact, together with the assumption that the particular report is published in such a source, makes it reasonable to believe that the report is true.

"[O]n what basis do I believe what I read in the newspaper? Certainly not that everything printed in the newspaper is true. No one believes that. But I do believe that it is probable that what is printed in the newspaper is true, and that justifies me in believing individual newspaper reports" (Thinking about Acting: Logical Foundations for Rational Decision Making (Oxford University Press, 2006), 109).

The inference here is defeasible. It is what Pollock calls the Statistical Syllogism. Initially, it may be stated as follows

     1. The probability that a thing is a B, given that it is an A, is greater than 1/2.
     2. c is an A.
     Therefore (defeasibly), c is a B.

One question about Statistical Syllogism is about the probability. It seems natural to think that the higher the probability, the stronger the reason. Further, it seems natural to think that the circumstances matter. That is to say, it seems natural to think that the probability could be greater than 1/2 but still not high enough for the agent to believe the conclusion on the basis of the premises.

Degree of Justification and Practical Importance

Pollock uses the story of the vacationing captain to show that "[t]he practical importance of a question (i.e., our degree of interest in it) determines how justified we must be in an answer before we can rest content with that answer" (Cognitive Carpentry. A Blueprint for How to Build a Person (The MIT Press, 1995), 48).

In the first part of the story, the captain is on a cruise vacation. He is a passenger and has no other role. He wonders how many lifeboats are on the ship. Pollock says that "[t]o answer this question, [the captain] might simply consult the descriptive brochure passed out to all the passengers." In the second part of the story, there is an accident. The ship is in danger of sinking. The officers of the ship, including its captain, are incapacitated. The vacationing captain becomes aware of the situation and assumes command. At this point, he cannot simply consult the brochure to learn the number of lifeboats on board. Pollock says that "it becomes very important for him to know whether there are enough lifeboats" onboard and that the captain "must either count them himself or have them counted by someone he regards as reliable" because now "[t]he importance of the question makes it incumbent on him to have a very good reason for his believed answer."

Pollock does not supply much detail, but the idea seems to be that because the number of lifeboats onboard the ship is not relevant to the decisions the captain will make as a passenger on a cruise vacation, its degree of interest is low for him before the accident. The sorts of things he will decide are where to eat, which shows to attend, and so on. In making these decisions, the number of lifeboats on board the ship does not matter one way or another. Its degree of interest is low enough that the degree of justification required to "rest content" with an answer is minimal. Given that he is content with the answer, he can frame his decision problems in a certain way. He can treat the number of lifeboats, whatever the brochure says it is, as part of the way the world is.

After the accident, this is no longer true. Now the number of lifeboats on board matters much more to him. According to Pollock, unless the captain can increase his degree of justification by counting the lifeboats or by having someone reliable count them for him, he will have to treat the proposition as merely probable. In this case, when he is deciding whether to give the order to abandon ship to await rescue in the lifeboats, he will not be able to treat the number of lifeboats on the ship as part of the way the world is. To make his decision, the captain will have to take into account his uncertainty about the number of lifeboats.

A More Precise Statement of Statistical Syllogism

Given that the degree of justification required is sensitive to the practical importance of the matter, it is possible to state the Statistical Syllogism more precisely. How high the probability must be is sensitive to circumstances. It must always be greater than 1/2, but just how much greater depends on the practical importance of the matter.

1. prob (a thing is a B given that it is an A) is high enough
2. c is an A
Therefore (defeasibly), c is a B

So, for example, in the newspaper case, whether it is reasonable to believe the report depends on how reliable newspapers are and on how much is at stake.
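The stakes-sensitive acceptance rule can be sketched as a simple threshold test. In the Python sketch below, the threshold function and the numbers are illustrative assumptions, not anything Pollock specifies:

```python
def acceptance_threshold(practical_importance):
    """Map practical importance (0 = trivial, 1 = critical) to the
    required probability: always above 1/2, approaching 1 as the
    stakes rise. The linear formula is an illustrative assumption."""
    return 0.5 + 0.5 * practical_importance

def may_accept(prob, practical_importance):
    """Defeasibly accept 'c is a B' only if the probability clears
    the stakes-sensitive threshold."""
    return prob > acceptance_threshold(practical_importance)
```

On this sketch, a 0.9-probable report clears the bar when little is at stake (threshold 0.6) but not when nearly everything is (threshold 0.975), which is the shape of the vacationing-captain story.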

Decision Problems, Belief, and Knowledge

This suggests that agents form beliefs in particular circumstances, that the degree of justification necessary for forming a belief in a circumstance is relative to the agent’s degree of interest in the proposition in that circumstance, and that it is rational for agents to use their beliefs defeasibly to frame their decision problems.

Consider again Pollock’s ship captain example. Suppose that when the captain first boards the ship, he consults the safety brochure and thereby comes to believe that there are n lifeboats on the ship. After the accident, when he has assumed command, it would be natural for him to give the following order to one of the crew. "I think there are n lifeboats onboard, but I need to be sure. Go count them and report back to me." This suggests that Pollock’s example is consistent with the possibility that after the accident the captain believes what he believed previously: that the number is what the brochure says it is. He retains this belief, but he thinks that he has insufficient justification to use this belief to frame his decision problem.

In the ship captain example, after the accident, Pollock suggests that the captain believes but lacks knowledge. He says that "it becomes very important for [the captain] to know whether there are enough lifeboats" and that "[t]he importance of the question makes it incumbent on him to have a very good reason for his believed answer [if he intends to rely on it]."
