Philosophy, Computing, and Artificial Intelligence

PHI 319. Kowalski's Attempt to Understand the Suppression Task.

Computational Logic and Human Thinking
Chapter 5 (75-91), Appendix A4 (284-289)


The Negation-as-Failure Rule

Like Disjunction-Introduction, Disjunction-Elimination, and Conditional-Elimination, the Negation-Introduction rule is traditionally understood so that its conclusion is a logical consequence of the premises in the deduction. Negation-as-failure (NAF), by contrast, is a rule of defeasible reasoning.

To see this, it is necessary first to distinguish NAF from the Negation-Introduction rule (¬I) of classical logic. The classical rule says that if absurdity (⊥) is a consequence of a set of premises together with the assumption φ, then ¬φ is a consequence of those premises:

    [φ]¹
      .
      .
      .
     ⊥
  ------- ¬I,1
    ¬φ

This is not a rule of defeasible reasoning. If absurdity (⊥) is a consequence of a set of premises and the assumption φ, it will remain a consequence even if we add more assumptions.

This shows one thing NAF is not. It is not the classical Negation-Introduction rule.

Negation in Ordinary Reasoning

To understand what NAF is, consider two ordinary instances of reasoning.

In everyday life, we often reason in the following way. I look at a schedule of flights from Phoenix to San Francisco. I don't see one listed as leaving at 10:00 am. So I naturally conclude that it is not the case that a flight leaves at that time. This inference seems entirely reasonable, but it is not sanctioned by the classical Negation-Introduction rule.
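
Anticipating the Prolog notation introduced at the end of this lecture, where \+ is negation-as-failure, here is a minimal sketch of the schedule reasoning. The predicate flight/2 and the listed times are illustrative assumptions, not from Kowalski.

flight(phoenix_to_sanfrancisco, '9:00am').
flight(phoenix_to_sanfrancisco, '11:30am').

% The query ?- \+ flight(phoenix_to_sanfrancisco, '10:00am'). succeeds,
% because the positive query ?- flight(phoenix_to_sanfrancisco, '10:00am'). fails.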

Why is this reasoning rational? Kowalski talks about what "justifies" this reasoning.

"The derivation of negative conclusions from the lack of positive information about a predicate is justified by a belief or assumption that we have all the positive information that there is to be had about the predicate" (Kowalksi, Computational Logic and Human Thinking, 78).

"The use of negation as failure to derive a negative conclusion is justified by the closed world assumption that you have complete knowledge about all the conditions under which the positive conclusion holds" (Kowalksi, Computational Logic and Human Thinking, 79).

The answer might be that the reasoning presupposes the closed world assumption.

If we think we know the complete list of flights from Phoenix to San Francisco, it seems reasonable to conclude that no flight leaves at a certain time if on the basis of the list we are unable to prove that there is a flight that leaves at that time.

In this case, though, the reasoning in the "flight" example looks to be an instance of conclusive reasoning, not defeasible reasoning. If we were to discover that a flight leaves for San Francisco at 10:00 am, we would have to give up our belief that we had a complete list of flights, not merely the conclusion. The reasoning is not defeasible. It is conclusive.

"[T]he kind of reasoning involved in the suppression task, once its intended logical form has been identified, is a form of default (or defeasible) reasoning, in which the conclusion of a rule is deemed to hold by default, but is subsequently withdrawn (or suppressed) when additional information contradicting the application of the rule is given later" (Kowalksi, Computational Logic and Human Thinking, 48).


Some of what Kowalski says is confusing.

"This property of negation as failure and the closed world assumption is called defeasibility or non-monotonicity. It is a form of default reasoning, in which an agent jumps to a conclusion, but then withdraws the conclusion given new information that leads to the contrary of the conclusion" (Kowalksi, Computational Logic and Human Thinking, 81).

It is unclear how the closed world assumption can have the property of defeasibility. Defeasibility is a property of reasoning.

Further, Kowalski's characterization of defeasible reasoning in terms of the agent withdrawing "the conclusion given new information that leads to the contrary of the conclusion" seems too narrow. It does not allow for the possibility of undercutting defeaters.

Another example of ordinary reasoning is the kind of reasoning Kowalski calls "default reasoning, in which an agent jumps to a conclusion, but then withdraws the conclusion given new information that leads to the contrary of the conclusion" (Computational Logic and Human Thinking, 81). This reasoning does not depend on the closed world assumption.

To understand "default" reasoning, consider a slight variation on Kowalski's "innocent unless proven guilty" example (Computational Logic and Human Thinking, 84).

A person is innocent of a crime if the person is accused of the crime and it is not the case that the person committed the crime.

A person committed a crime if another person witnessed the person commit the crime.

Bob is accused of the crime of robbing a bank.

Suppose we think of this example along the lines of a logic program. Suppose the query is

Bob is innocent of the crime of robbing a bank.

This query will succeed if these queries

the person (Bob) is accused of the crime

it is not the case that the person (Bob) committed the crime

succeed. If this second query is understood to succeed just in case

the person (Bob) committed the crime

fails, then this second query will succeed. If, however, we change the example to include the belief that someone witnessed Bob commit the crime, the query will fail.
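
The example can be rendered as a Prolog program, again anticipating the notation introduced later in this lecture, in which \+ is negation-as-failure. The predicate names and the witness ted are illustrative assumptions.

innocent(Person, Crime) :- accused(Person, Crime), \+ committed(Person, Crime).
committed(Person, Crime) :- witnessed(_Witness, Person, Crime).
accused(bob, bank_robbery).

% ?- innocent(bob, bank_robbery). succeeds because committed(bob, bank_robbery) fails.
% Adding the fact witnessed(ted, bob, bank_robbery). makes the same query fail.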

Now, it seems, we know what NAF (Negation-as-Failure) is. It is the kind of defeasible reasoning in Kowalski's "innocent unless proven guilty" example.

Negation-as-Failure in Logic Programming

Logic programming may be developed so that it allows for NAF.

Logic programs, as we have defined them, do not contain "nots" in the tails of rules. So the first step is to allow "nots" to appear in the tails of rules. We allow a rule to have this form:

positive condition if positive conditions and not-condition

In addition, we have to modify the evaluation step to handle these "not" conditions.

Consider the following program:

P if Q, not-R
R if S, T
Q
S

Suppose the query is

?-P

This query unifies with the head of the first rule. So the derived query becomes

Q, not-R

Q unifies with the head of the first fact. Since it (like all facts) has no body, the derived query is

not-R

In logic programming with NAF, this query succeeds just in case the following query fails

R

This query unifies with the head of the second rule to produce the query

S, T

S unifies with the head of the second fact. Since it has no body, the new query is

T

This query does not unify with any head in the program. So the query

R

fails. Hence, in logic programming with NAF, the query

not-R

succeeds. And so the original query

?- P

succeeds. P is a consequence (but not a logical consequence) of the program.
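
The computation can be checked in Prolog, where \+ is negation-as-failure and lower-case names are used because Prolog reads capitalized names as variables. Here is a minimal sketch:

p :- q, \+ r.
r :- s, t.
q.
s.

% ?- p. succeeds: q is a fact, and \+ r succeeds because the query r fails
% (t does not unify with any head in the program).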

The Suppression Task Revisited

Kowalski suggests, reasonably enough, that in everyday conversation it is common to state only the most important conditions of a general statement and to leave implicit the other conditions that apply. In the Suppression Task, for example, the general statement is

If she has an essay to write, she will study late in the library.

If, according to Kowalski (Computational Logic and Human Thinking, 86), the underlying rule were made more explicit, it would look something like this:

If she has an essay to write, and it is not the case that she is prevented from studying late in the library,
then she will study late in the library.

To set this out more formally, let the sentences be symbolized as follows:

E = She has an essay to write
L = She will study late in the library
P = She is prevented from studying late in the library

The corresponding logic program or KB is

L if E, not-P
E

Relative to this KB, the query

?- L

succeeds. This is the outcome the experimenters expect. Here is the computation. L unifies with the head of the rule. The new query list is

E, not-P.

E unifies with the head of the fact. Facts have no tails. So now the query is

not-P.

This 'not' is negation-as-failure. This means that this query succeeds if the query

P

fails. This query does fail, so not-P succeeds. At this point, then, because the query list is empty, it follows that L is a consequence (but not a logical consequence) of the program.
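
In Prolog syntax, with the atoms written in lower case, the same computation is:

l :- e, \+ p.
e.

% ?- l. succeeds: e is a fact, and \+ p succeeds because the query p fails.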

Explaining the Experimental Results

Given NAF and this new KB, it may seem that the logic programming/agent model provides a way to explain why subjects draw the conclusion

She will study late in the library

on the basis of the premises

If she has an essay to write, then she will study late in the library
She has an essay to write

Only the most important conditions appear explicitly in the conditional, but the subjects incorporate the conditional into their minds in a way that uses NAF and that still makes the conclusion a consequence of the premises in the KB.

Since about 40% of the subjects "suppress" (or retract) the conclusion

She will study late in the library

upon receiving the new information

If the library is open, she will study late in the library.

there must be a natural way to incorporate the new information into the model so that

?- L.

fails. Otherwise, Kowalski has not explained the results in the Suppression Task.

One Way to Incorporate the New Information

The new information makes explicit a condition that was implicit in the original information:

She is prevented from studying late in the library if the library is not open.

One way to incorporate this new information changes the KB so that it is

L if E, not-P
E
P if not-O

Relative to this KB, the query

?- L

fails. Here is the computation. L unifies with the head of the first rule. The new query list is

?- E, not-P.

E unifies with the head of the first fact. It has no tail. So now the query is

?- not-P.

This 'not' is negation-as-failure. So the query succeeds if the query

?- P

fails. P unifies with the head of the second rule. This produces the derived query

?- not-O.

This 'not' is negation-as-failure. So the query succeeds if the query

?-O

fails. This query does fail: O does not unify with any head in the program. So

?- not-O.

succeeds, and so the query ?- P succeeds. This means that

?- not-P.

fails. Hence L is not a consequence of the KB.
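
In Prolog syntax, the modified KB behaves the same way:

l :- e, \+ p.
e.
p :- \+ o.

% ?- l. fails: the query o fails, so \+ o succeeds, so p succeeds,
% and hence \+ p fails.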

There Appears to be a Problem with this Explanation

Now, though, we must ask ourselves a question:

Does incorporating the new information in the KB in the way we incorporated it explain why subjects suppress (or retract) the conclusion L?

The answer, it seems, is that it does not.

There are lots of possible conditions that would prevent one from studying in the library. Here are the possibilities that Kowalski gives (Computational Logic and Human Thinking, 86-87):

She is prevented from studying late in the library if the library is not open.
She is prevented from studying late in the library if she is unwell.
She is prevented from studying late in the library if she has a more important meeting.
She is prevented from studying late in the library if she has been distracted.

In this list, it is not obvious that all of the preventing conditions are negative; only the first ("the library is not open") is. So it is unclear that, given the new information, the conclusion (She will study late in the library) fails to be a consequence of the KB. If so, it is unclear that we have an explanation for why subjects in the experiment "suppress" their original conclusion when they receive the new information.

Two Other Ways to Incorporate the Information

There are two other ways to add the new information to the KB.

The first incorporates the new information so that the KB becomes

E = She has an essay to write
L = She will study late in the library
O = The library is open

L if E, O
E

The second incorporates the new information so that the KB becomes

E = She has an essay to write
L = She will study late in the library
P = She is prevented from studying late in the library
C = The library is closed

L if E, not-P
E
P if C
C

L is not a consequence of either logic program.
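
Both alternatives can be checked in Prolog, loaded as two separate programs:

% First alternative
l :- e, o.
e.
% ?- l. fails: o does not unify with any head in the program.

% Second alternative
l :- e, \+ p.
e.
p :- c.
c.
% ?- l. fails: c is a fact, so p succeeds, and hence \+ p fails.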

Which of these two possibilities is the more plausible?

This second possibility can seem to be the more plausible of the two insofar as the new information is added without changing an existing entry in the KB.

The problem, though, is that one of the additions to the KB in the second possibility is the belief that the library is closed. There appears to be no justification for this addition.

Which possibility does Kowalski intend?

His discussion (Computational Logic and Human Thinking, 87) suggests that he thinks that the first possibility is correct. He says that the "higher-level representation"

she will study late in the library if she has an essay to write and it is not the case that she is prevented from studying late in the library.
she is prevented from studying late in the library if the library is not open.
she is prevented from studying late in the library if she is unwell.
she is prevented from studying late in the library if she has a more important meeting.
she is prevented from studying late in the library if she has been distracted.

"The relationship between the two formulations [in the Suppression Task] is another example of the relationship between a higher-level and lower-level representation, which is a recurrent theme in this book. In this case, the higher-level rule acts as a simple first approximation to the more complicated rule. In most cases, when a concept is under development, the complicated rule doesn’t even exist, and the higher-level representation as a rule and exceptions makes it easier to develop the more complex representation by successive approximation" (Robert Kowalski, Computational Logic and Human Thinking, 87). is "compiled into" the following "lower-level representation":

she will study late in the library
if she has an essay to write
and the library is open
and she is not unwell
and she doesn’t have a more important meeting
and she hasn’t been distracted.

Kowalski's view, it seems, is that this "compiling" happens in human beings as part of the development of the concept "she will study late in the library."
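
A Prolog sketch of this lower-level rule might look as follows; the atoms unwell, meeting, and distracted are illustrative abbreviations of Kowalski's conditions:

l :- e, o, \+ unwell, \+ meeting, \+ distracted.
e.
o.

% With e and o as facts and no facts for unwell, meeting, or distracted,
% ?- l. succeeds: each \+ condition succeeds because its positive query fails.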

Problems for the Logic Programming/Agent Model

The addition of NAF (negation-as-failure) improves the logic programming/agent model, but it also highlights certain problems that need solutions if the model is to be adequate.

1. "[T]he human designer, after studying the world, uses the language of a particular logical system to give to our agent an initial set of beliefs `Delta_0` about what this world is like. In doing so, the designer works with a formal model of this world, W, and ensures that W ⊨ `Delta_0`. Following tradition, we refer to `Delta_0` as the agent’s (starting) knowledge base. (This terminology, given that we are talking about the agent’s beliefs, is known to be peculiar, but it persists.) Next, the agent ADJUSTS its knowledge base to produce a new one, `Delta_1`. We say that adjustment is carried out by way of an operation `ccA`; so A[`Delta_0`]= `Delta_1`. How does the adjustment process, `ccA`, work? ... [The adjustment] can come by way of any mode of reasoning.... The cycle continues when the agent ACTS on the environment, in an attempt to secure its goals. Acting, of course, can cause changes to the environment. At this point, the agent SENSES the environment, and this new information `Gamma_1` factors into the process of adjustment, so that `ccA[Delta_1 ∪ Gamma_1] = Delta_2`. The cycle of SENSESADJUSTSACTS continues to produce the life `Delta_0`, `Delta_1`, `Delta_2`,`Delta_3`…, … of our agent" (Stanford Encyclopedia of Philosophy, Artificial Intelligence). Suppose an agent notices that "not-φ" is a consequence of what he believes. He is permitted to add it to his beliefs, but logic programming with NAF does not allow for this possibility.

2. Suppose an agent reasons that something is red because it looks red and is not a white object with a red light shining on it.

R = The object is red.
L = The object looks red.
W = The object is white and has a red light shining on it.

R if L, not-W.
L.

Since R is a consequence of what the agent believes, he is permitted to add R to his beliefs. Suppose he does add it. Suppose that subsequently he comes to believe that W is true. Now he should retract his belief in R. As we have set out the logic programming with NAF/agent model, there is no mechanism for adjusting the KB when defeaters are added to it.
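
A Prolog sketch makes the problem concrete; the predicate and constant names are illustrative:

red(X) :- looks_red(X), \+ white_in_red_light(X).
looks_red(obj1).

% ?- red(obj1). succeeds, so the agent may add the belief R.
% If the fact white_in_red_light(obj1). is added later, the same query fails,
% but nothing in the model retracts the belief R that was added earlier.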

3. Suppose an agent notices that a proposition is a consequence of what he believes. He is permitted to add the consequence to his existing beliefs, but he is not required to add it. He might instead decide to give up one of his existing beliefs. Yet, as we have set it out, in neither the original nor the model with NAF is there a mechanism for making this decision.

Negation-As-Failure in Prolog

Here is an example Prolog program (based on the movie Pulp Fiction) with NAF (negation-as-failure). Suppose that Vincent enjoys every kind of burger except a Big Kahuna Burger.

enjoys(vincent,X) :- burger(X), \+ big_kahuna_burger(X).
burger(X) :- big_mac(X).
burger(X) :- big_kahuna_burger(X).
burger(X) :- whopper(X).
big_mac(a).
big_mac(c).
big_kahuna_burger(b).
whopper(d).

Suppose we ask whether there is something Vincent enjoys:

?- enjoys(vincent,X).

The query unifies with the head of the first rule to produce the derived query list

burger(X), \+ big_kahuna_burger(X)

The first conjunct in this query unifies with the head of the first rule in the definition of burger. Now the derived query is

big_mac(X), \+ big_kahuna_burger(X)

The first conjunct of the derived query unifies with the first fact in the program, instantiating X to a. Now the derived query is

\+ big_kahuna_burger(a)

This succeeds if

big_kahuna_burger(a)

fails. It does fail: big_kahuna_burger(a) does not unify with any head in the program. So \+ big_kahuna_burger(a) succeeds, and the original query succeeds with X = a.

What We Have Accomplished in this Lecture

We distinguished the Negation-as-Failure rule (NAF) from the Negation-Introduction (¬I) rule of classical logic. We incorporated NAF into logic programming. We looked at Kowalski's use of NAF in his solution to the Suppression Task. We considered an example of NAF in Prolog.



