The Psychology of Logic. Two Experimental Challenges to the Logic Programming/Agent Model of Intelligence

Computational Logic and Human Thinking, 2


The First Challenge: The Wason Selection Task

The Wason Selection Task refers to an experiment in psychology that Peter Wason performed in 1968.

In the experiment, the subject is given a "task" to perform: to answer a question posed in terms of cards the subject can see. There are four cards, with letters on one side and numbers on the other. The cards are lying on a table with only one side of each card showing:

[Image: the four cards, showing (in order) the letter d, another letter, the number 3, and a number other than 3; in the standard presentation, d, f, 3, 7]

The subject is asked to select those and only those cards that must be turned over to determine whether the following statement is true:

If there is a d on one side, then there is a 3 on the other side


*** MAKE THE SELECTION YOURSELF BEFORE YOU LOOK AT THE ANSWER THE EXPERIMENTERS EXPECTED ***

[Answer] The answer the experimenters expected is that the first and fourth cards are the ones that must be turned over to determine whether the statement is true. This is the answer classical logic dictates when the sentence is interpreted as a material conditional, φ → ψ, which is false just in case φ is true and ψ is false. (If the other side of the first card shows something other than a 3, the statement is false. If the other side of the fourth card shows a d, the statement is false. For the second and third cards, the statement is true no matter what the other side shows.)
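To make the material-conditional reading concrete, here is a minimal sketch in Python. It assumes the four visible faces are d, f, 3, and 7 (the standard presentation): a card must be turned over exactly when its hidden side could falsify the conditional.

  # Decide which cards must be turned to test "if d on one side, then 3
  # on the other", read as a material conditional.
  def must_turn(face):
      if face.isalpha():
          # Visible letter: the hidden side is a number. Only a visible d
          # can be falsified (by a hidden number other than 3).
          return face == "d"
      # Visible number: the hidden side is a letter. Only a visible non-3
      # can be falsified (by a hidden d).
      return face != "3"

  for face in ["d", "f", "3", "7"]:
      print(face, "turn" if must_turn(face) else "leave")
  # prints: d turn, f leave, 3 leave, 7 turn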


Most subjects in Wason's original experiment did not make the expected selection; typically they chose the first card alone, or the first and third. However, most people do far better on tasks that are formally equivalent but have more meaningful content. For example, suppose that the task is to enforce the rule that

You must be at least twenty-one years old to drink in a bar

This task is equivalent to checking cases to determine the truth of the conditional

If a person is drinking alcohol in a bar, then the person is at least twenty-one years old

Given the following description of the cases to consider:

A, a person is drinking (alcoholic) beer
B, a person is a senior citizen
C, a person is drinking (non-alcoholic) soda
D, a person is a primary school child

most people know to check both A and D.
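The same decision procedure carries over. Here is the card sketch recast for the bar version (the case encodings are invented for illustration):

  # Each case reveals one "side" of a person: what they are drinking, or
  # how old they are. The rule: if drinking alcohol, then at least 21.
  cases = {
      "A": ("drink", "alcoholic"),
      "B": ("age", "over 21"),
      "C": ("drink", "non-alcoholic"),
      "D": ("age", "under 21"),
  }

  def must_check(kind, value):
      if kind == "drink":
          return value == "alcoholic"   # the person's age might be under 21
      return value == "under 21"        # the person's drink might be alcoholic

  for label, (kind, value) in cases.items():
      print(label, "check" if must_check(kind, value) else "ignore")
  # prints: A check, B ignore, C ignore, D check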


How to Explain the Observed Results of the Wason Selection Task

What explains these asymmetrical experimental results for tasks posed in terms of the two conditionals? If thinking is computation, and in particular if the computations in logic programming are a good model of reasoning, then one might expect subjects to be equally good at reasoning with both conditionals.


Evolutionary Psychology

The evolutionary psychologist Leda Cosmides suggests that humans have evolved a specialized algorithm to detect cheaters in social contracts. The idea is that the ability to reason about rules is not an instance of a single general capacity to reason. Rather, there are domain-specific capacities to reason. These capacities develop in human beings at different ages and result in different competencies. Further, it might be that this way of being intelligent evolved in human beings from earlier, more primitive domain-specific reasoning schemes. It might be that the evolution of human intelligence was a matter of the ability to cobble together ancestral reasoning schemes, for, say, route planning, to serve novel ends such as abstract mathematical reasoning (many mathematicians appear to rely on something like "geometrical intuition" to guide their thinking even about mathematical questions that bear no evident resemblance to navigational plotting).

This domain-specific understanding of intelligence allows for an intriguing explanation of the different experimental outcomes in reasoning about rules.

According to the explanation Cosmides has suggested, one very important domain for reasoning is what might be called the "domain of social morality." Cooperation is beneficial for human beings, and cooperation is stable only so long as cheaters are detected and deterred. If there were a specialized capacity for reasoning in this domain, this would explain why people do better with the "alcohol" conditional than with the "card" conditional: the alcohol rule expresses a social contract whose violators are cheaters, while the card rule does not. Evolution has given humans the intelligence that makes them good at detecting cheaters.


A Challenge to the Evolutionary Explanation

Robert Kowalski (the author of CLHT) rejects this explanation of the results of the experiment.

Kowalski tries to understand the import of the Wason experiment in terms of the formula

Natural language understanding = translation into logical form + general purpose reasoning

Kowalski's explanation is not straightforward to understand, but the rough idea is that reason is a single general-purpose capacity and that subjects understand the conditionals in the experiment according to the logic programming/agent model framework. We will consider Kowalski's challenge in more detail later in the course.



The Second Challenge: The Suppression Task

The Suppression Task is an experiment Ruth Byrne conducted in 1989. Like the Wason Selection Task, the Suppression Task seems to show something about how human beings actually reason.

In the experiment, the subjects are asked to consider the following two statements:

If she has an essay to write, then she will study late in the library
She has an essay to write

On the basis of these two statements, most people in the experiment conclude that

She will study late in the library

However, given the additional information

If the library is open, then she will study late in the library

many people in the experiment, about 40%, "suppress" (or retract) their earlier conclusion.

According to classical logic, this is a mistake. The conclusion still follows. The argument with the additional information has the following form:

  If P, then Q
  P
  R
=====
  Q

There is a proof of Q on the basis of the first two premises (by modus ponens). The addition of the third premise does nothing to change this fact: classical consequence is monotonic, so new premises never invalidate an old conclusion.
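The monotonicity point can be checked by brute force over truth assignments. The following sketch (the formalization is mine, not the experimenters') confirms that Q is entailed with or without the extra premise R:

  from itertools import product

  def entails(premises, conclusion, atoms=("P", "Q", "R")):
      # Entailment holds iff every assignment satisfying all the
      # premises also satisfies the conclusion.
      for values in product([True, False], repeat=len(atoms)):
          m = dict(zip(atoms, values))
          if all(prem(m) for prem in premises) and not conclusion(m):
              return False
      return True

  if_p_then_q = lambda m: (not m["P"]) or m["Q"]   # material conditional
  p = lambda m: m["P"]
  q = lambda m: m["Q"]
  r = lambda m: m["R"]

  print(entails([if_p_then_q, p], q))     # True
  print(entails([if_p_then_q, p, r], q))  # still True: R changes nothing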


How to Explain the Observed Result in the Experiment

What explains the results observed in the experiment? One possibility is that human beings have no built-in single capacity for reasoning. Reasoning in certain ways can be learned, but it takes time and training. The reasoning the experimenters expected in the Suppression Task is an example: it takes special training in logic to reason in this way.


Kowalski's Explanation

Robert Kowalski (the author of CLHT) suggests that the subjects do not understand the sentences in the way the experimenters expect and do not reason in the way the experimenters expect.

We will consider Kowalski's solution in more detail later, but part of his idea is that the reasoning in the Suppression Task is defeasible reasoning, not conclusive reasoning. The logic programming/agent model, as we currently understand it (as a way to answer queries by backward chaining), is not able to model defeasible reasoning. Kowalski, in his response to the Suppression Task, suggests a way to supplement the logic programming/agent model so that it can model both kinds of reasoning.
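Though the details come later, a rough sketch can convey the flavor. Suppose (one possible reading, not necessarily Kowalski's exact formulation) that subjects take the second conditional to strengthen the first, so the operative rule becomes "she will study late in the library if she has an essay to write and the library is open." Backward chaining over the amended program then fails to derive the conclusion, since "the library is open" is not known to hold:

  def solve(goal, rules, facts):
      # Propositional backward chaining: a goal holds if it is a known
      # fact, or if some rule for it has a fully provable body.
      if goal in facts:
          return True
      return any(all(solve(g, rules, facts) for g in body)
                 for head, body in rules if head == goal)

  facts = {"has_essay"}
  original = [("study_late", ["has_essay"])]
  amended  = [("study_late", ["has_essay", "library_open"])]

  print(solve("study_late", original, facts))  # True: the conclusion is drawn
  print(solve("study_late", amended, facts))   # False: it is "suppressed"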


Conclusive and Defeasible Reasoning

Conclusive reasons stand in the relation of logical consequence to their conclusions. In logic programming, backward chaining (as we are currently understanding it) may be understood as an example of conclusive reasoning: a query succeeds only if it is true in every model that makes the entries in the knowledge base (KB) true. So if an agent wishes to reject the truth of a successful query, the agent must reject one of the propositions in the KB.
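For a toy KB this soundness claim can be verified by brute force (the two-clause program below is invented for the example): a query that backward chaining proves is true in every model of the KB.

  from itertools import product

  rules = [("p", []), ("q", ["p"])]   # toy KB: p holds; q holds if p holds
  atoms = ("p", "q")

  def solve(goal):
      # Backward chaining: try each rule whose head matches the goal.
      return any(all(solve(g) for g in body)
                 for head, body in rules if head == goal)

  def true_in_every_model(query):
      for values in product([True, False], repeat=len(atoms)):
          m = dict(zip(atoms, values))
          # m is a model of the KB if, for every rule, a true body
          # forces a true head.
          is_model = all((not all(m[g] for g in body)) or m[head]
                         for head, body in rules)
          if is_model and not m[query]:
              return False
      return True

  print(solve("q"))                # True: the query succeeds
  print(true_in_every_model("q"))  # True: q holds in every model of the KB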

Unlike in the case of conclusive reasons, new information may defeat defeasible reasons. There are two kinds of defeat. A defeasible reason may be undercut or rebutted.

Suppose that I form the belief that some object is red because it looks red. The conclusion "The object is red" is not a logical consequence of the premise "The object looks red," but in the absence of information to the contrary, it is reasonable to believe the conclusion. That is to say, the premise is a defeasible reason to believe the conclusion.

One way the conclusion that the object is red can be defeated is by being undercut. This form of defeat attacks the inference, not the conclusion. It would happen if I were to come to know that the object is illuminated by a red light and that red lights make white objects look red. The object would still look red, and might even be red unbeknownst to me, but it is no longer rational for me to believe that it is red because it looks red.

Suppose that I form the belief that all A's are B's because I have seen many A's and have seen that they are all B's. As before, the conclusion is not a logical consequence of the premise. Still, in the absence of contrary information, it is reasonable to believe the conclusion on the basis of the premise. The premise is a defeasible reason to believe the conclusion.

One way the conclusion that all A's are B's can be defeated is by being rebutted. This form of defeat attacks the conclusion, not the inference. It would happen if I were to see an A that is not a B. It would still be rational for me to believe that the many A's I saw previously are all B's, but it would no longer be rational for me to believe that all A's are B's.
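The asymmetry between the two kinds of defeat can be made vivid in a toy model (every name below is invented for illustration): an undercutter disables one particular inference, while a rebuttal attacks the conclusion however it is reached.

  def supported(conclusion, reasons, info, undercutters, rebuttals):
      # A rebuttal defeats the conclusion no matter how it was reached.
      if rebuttals.get(conclusion, set()) & info:
          return False
      # Otherwise the conclusion stands if some reason for it is present
      # and that particular inference is not undercut.
      return any(premise in info
                 and not (undercutters.get((premise, conclusion), set()) & info)
                 for premise in reasons.get(conclusion, []))

  reasons = {"is_red": ["looks_red"]}
  undercutters = {("looks_red", "is_red"): {"lit_by_red_light"}}
  rebuttals = {"is_red": {"reliably_reported_white"}}

  print(supported("is_red", reasons, {"looks_red"},
                  undercutters, rebuttals))                      # True
  print(supported("is_red", reasons, {"looks_red", "lit_by_red_light"},
                  undercutters, rebuttals))                      # False: undercut
  print(supported("is_red", reasons, {"looks_red", "reliably_reported_white"},
                  undercutters, rebuttals))                      # False: rebutted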


Defeasible reasoning appears to be very common in ordinary, everyday life. So for the logic programming/agent model framework to be of much use in modeling intelligence, it must be able to model this kind of reasoning. As we are currently understanding the framework, it is not capable of doing so. This is a problem that will need to be solved.




