Philosophy, Computing, and Artificial Intelligence
PHI 319. Two Famous Experiments in Psychology.
Computational Logic and Human Thinking
Chapter 2 (38-53)
In this lecture, we consider two famous experiments in psychology. These experiments seem to show something about reasoning in human beings. Note, though, that even if these experiments do show something about reasoning in human beings, nothing directly follows about the logic programming/agent model of the intelligence that characterizes a rational agent. The goal in AI is to model intelligence, not necessarily human intelligence. The experiments are interesting nevertheless because human intelligence is the clearest example of intelligence.
The Wason Selection Task
"Reasoning about a rule," P. C. Wason. The Quarterly Journal of Experimental Psychology, 20:3, 1968, 273-281. The "Wason Selection Task" refers to an experiment Peter Wason performed in 1968.
"The subjects [in a pilot study (Wason, 1966)] were presented with the following sentence, 'if there is a vowel on one side of the card, then there is an even number on the other side,' together with four cards each of which had a letter on one side and a number on the other side. On the front of the first card appeared a vowel (P), on the front of the second a consonant (P), on the front of the third an even number (Q), and on the front of the fourth an odd number (Q). The task was to select all those cards, but only those cards, which would have to be turned over in order to discover whether the experimenter was lying in making the conditional sentence. The results of this study, and that of a replication by Hughes (1966), showed the same relative frequencies of cards selected. Nearly all subjects select P, from 60 to 75 per cent. select Q, only a minority select not-Q and hardly any select not-P. Thus two errors are committed: the consequent is fallaciously affirmed and the contrapositive is withheld. This type of task will be called henceforth the 'selection task'" (Wason 1968: 273-274). The subject is asked to make a certain selection from among presented options. This task is posed in terms of cards the subject can see. There are four cards, with letters on one side and numbers on the other. The cards are lying on a table with only one side of each card showing:
In the version of the task we will consider, the first card shows the letter "d," the fourth shows a number other than "3," and the remaining two cards show another letter and the number "3." The task is to select those and only those cards that must be turned over to determine whether the following statement is true: If there is a d on one side, then there is a 3 on the other side.
Try to make the selection yourself before reading the answer below.
The experimenters expect the subjects to select
the first and fourth cards. This selection is the response classical logic dictates when the
sentence is interpreted as having the logical form of a material conditional, φ → ψ.
If the back side of the first card shows something other than a "3," the statement is false. If the back side
of the fourth card shows a "d," the statement is false. For the second and third card,
the statement is true no matter what the back side of the card shows.
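The classical reading can be made concrete with a short Python sketch. The particular card faces ("d", "f", "3", "7") are illustrative assumptions, not from the original experiment description. A card must be turned over exactly when some possible hidden face would falsify the material conditional:

```python
# Material-conditional reading of "if d on one side, then 3 on the other".
# Visible faces "d", "f", "3", "7" are illustrative assumptions.

def falsifies(letter, number):
    """The conditional is false exactly when the letter side is 'd'
    and the number side is not '3' (antecedent true, consequent false)."""
    return letter == "d" and number != "3"

LETTERS = ["d", "f"]   # possible letter faces
NUMBERS = ["3", "7"]   # possible number faces

def must_turn(visible):
    """A card must be turned over iff some hidden face could falsify."""
    if visible in LETTERS:            # hidden face is a number
        return any(falsifies(visible, n) for n in NUMBERS)
    else:                             # hidden face is a letter
        return any(falsifies(l, visible) for l in LETTERS)

for face in ["d", "f", "3", "7"]:
    print(face, must_turn(face))     # only "d" and "7" come out True
```

Running the check confirms the classical answer: only the "d" card and the odd-number card can reveal a falsifying combination.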
In fact, very few subjects give the answer the experimenters expect. Most turn over the card showing "d." Many also turn over the card showing "3." Only a few turn over the card showing a number other than "3."
Most people do far better on tasks that are formally equivalent but have more meaningful content. For example, suppose that the task is to enforce the policy that
You must be at least twenty-one years old to drink in a bar
This task is equivalent to checking cases to determine the truth of the conditional
If a person is drinking alcohol in a bar, then the person is at least twenty-one years old
Given the following description of the cases
card 1: person is drinking (alcoholic) beer
card 2: person is a senior citizen
card 3: person is drinking (non-alcoholic) soda
card 4: person is a primary school child
most people know what to do. They know, for example, that if the person is a primary school child, they must check whether that person is drinking alcohol in the bar.
How to Explain the Observed Results
What is the explanation for these asymmetrical experimental results with respect to the task posed in terms of these two conditionals? If thinking is computation, and in particular if the computations in logic programming are a good model of reasoning, then one might think that subjects would be equally good at reasoning with both of these conditionals.
A Hypothesis about Logic from Evolutionary Psychology
The evolutionary psychologist
Leda Cosmides suggests that humans have evolved with a
specialized algorithm to
detect cheaters in social contracts. The idea is that the
ability to reason about the connections between states in the world is not an instance of a single
capacity. Rather, there are domain specific
capacities that develop in human
beings at different ages and result in different
competencies. (Further, it might be that this way of being
intelligent evolved in human beings from earlier, more
primitive domain specific capacities. It might be that
the evolution of human intelligence was a matter of the
ability to cobble together ancestral capacities, for,
say, route planning, to serve novel ends such as abstract
mathematical reasoning. (Many mathematicians appear to rely on
something like "geometrical intuition" to guide their
thinking even about mathematical questions that bear no
evident resemblance to navigational plotting.))
Since cooperation is beneficial for survival from an evolutionary perspective, and since cooperation is stable as long as cheaters are detected and deterred, it might be that one domain for reasoning is what might be called the "domain of social morality." If there were such a domain specific capacity for reasoning about cheaters, maybe this explains why people do better with the "alcohol" conditional than with the "cards with numbers and letters" conditional. Evolution has given humans the intelligence that makes them good at detecting cheaters.
Domain Specific Deduction and Logical Deduction
To understand the import of Cosmides' hypothesis, it is necessary to get straight on what logic is.
A textbook answer is that logic is about logical consequence. In reasoning, deduction is an activity in which an agent draws out implications of a set of premises. He or she reasons from premises to conclusions. Some but not all deductions are logical deductions. In logical deductions, the implications are logical consequences of the premises. In other deductions, the implications are implications but not logical consequences of the premises.
An example may help clarify the distinction between deductions and logical deductions.
In reasoning about an electrical circuit, one might reason from the premise that the switch is open to the conclusion that the lamp is off. This is a deduction, but it is not a logical deduction. The conclusion is a reasonable implication of the premise given how states are typically connected in electrical circuits, but the conclusion is not a logical consequence of the premise.
Logical deductions are deductions according to rules of logic. There is dispute among philosophers about what the rules of logic are (for an interesting paper on the matter, see "The simple argument for subclassical logic"), but it is not our purpose to consider this dispute here. For the sake of illustration, we take Disjunction-Introduction (∨I) and Disjunction-Elimination (∨E) as examples of rules. These two deduction rules may be stated as follows:
      P                  Q
   ------- ∨I         ------- ∨I
    P ∨ Q              P ∨ Q

               [P]¹    [Q]²
                .       .
                .       .
                .       .
    P ∨ Q       R       R
   --------------------------- ∨E, 1, 2
                R
∨I says that a disjunction is a logical consequence of either of its disjuncts.
∨E is a little less straightforward. It says that given deductions of a conclusion from the disjuncts of a disjunction, the conclusion is a consequence of the disjunction. If the deductions of the conclusion are logical deductions, then the conclusion is a logical consequence of the disjunction. If the deductions of the conclusion from the disjuncts are not logical deductions, then the deduction of the conclusion from the disjunction is not a logical deduction.
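As a sanity check on these rules, a brute-force truth-table search can confirm them for classical logic. This is my own Python sketch, not part of the lecture; the deductions of R from each disjunct are encoded as the conditional premises P → R and Q → R:

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment by brute force: the conclusion is true in
    every valuation of the atoms that makes all the premises true."""
    for vals in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# vI: P entails P v Q
print(entails([lambda v: v["P"]],
              lambda v: v["P"] or v["Q"], ["P", "Q"]))    # True

# vE: P v Q, P -> R, Q -> R together entail R
print(entails([lambda v: v["P"] or v["Q"],
               lambda v: (not v["P"]) or v["R"],
               lambda v: (not v["Q"]) or v["R"]],
              lambda v: v["R"], ["P", "Q", "R"]))         # True
```

The search finds no valuation where the premises hold and the conclusion fails, which is just what it means for the conclusion to be a logical consequence.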
The Capacity for Logical Deduction
Now it is possible to get a little clearer on the hypothesis from evolutionary psychology. The idea is that human beings naturally develop the capacity to recognize connections between states in certain domains, such as "domain of social morality." The capacity for logical deduction is not domain restricted. It is a capacity to draw implications in any domain. According to the hypothesis from evolutionary psychology, human beings do not naturally develop this capacity. Although it is something most human beings can learn if they put in the effort, it is not something they acquire naturally as they become adults, like, say, their adult teeth.
An Alternative Explanation of the Observed Results
Robert Kowalski (in Computational Logic and Human Thinking) rejects the explanation of the results of the experiment in terms of the hypothesis from evolutionary psychology. Instead, he tries to understand the import of the Wason experiment in terms of the formula
"Natural language understanding = translation into logical form + general purpose reasoning" (Computational Logic and Human Thinking, 217).
This formula is not straightforward to understand, but the idea is that subjects in the experiment make the selections they do not because they have not developed the capacity for logical deduction but because they understand the conditionals according to the framework of the logic programming/agent model. This understanding is not the one the experimenters expect.
We will consider Kowalski's argument in more detail in a subsequent lecture.
The Suppression Task
"Suppressing valid inferences with conditionals," R.M.J. Byrne. Cognition, 31, 1989, 61-83.
The "Suppression Task" refers to an experiment
Ruth Byrne conducted in 1989.
Like the Wason Selection Task, the Suppression Task seems to
show something about reasoning.
In the experiment, the subjects are asked to consider the following two statements:
If she has an essay to write, then
she will study late in the library
She has an essay to write
On the basis of these two statements, most people in the experiment draw the conclusion that
She will study late in the library
However, given the additional information
If the library is open, then she will study late in the library
many people in the experiment, about 40%, "suppress" (or retract) their earlier conclusion. According to classical logic, this "suppression" is a mistake. The conclusion (She will study late in the library) still follows as a logical consequence of the stated premises.
The argument has the following logical form:
1. If P, then Q     If she has an essay to write, then she will study late in the library
2. P                She has an essay to write
3. R                If the library is open, then she will study late in the library
----
4. Q                She will study late in the library
Like Disjunction-Introduction and Disjunction-Elimination, Conditional-Elimination is a traditional rule of logic. The conclusion is a logical consequence of premises (1) and (2). The deduction from these premises proceeds by the rule of Conditional-Elimination (→E):
    P    If P, then Q              P    P → Q
   ------------------ → E         ----------- → E
            Q                          Q
The addition of the third premise does nothing to change this fact. It is just extra information.
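Classical consequence is monotonic: adding premises never removes conclusions. A brief brute-force check in Python (my own illustration, not from the text) confirms that Q follows from {If P then Q, P}, and still follows when the extra premise R is added:

```python
from itertools import product

ATOMS = ["P", "Q", "R"]

def entails(premises, conclusion):
    """True iff every valuation making all premises true makes
    the conclusion true as well (classical consequence)."""
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

if_p_then_q = lambda v: (not v["P"]) or v["Q"]   # premise 1
p           = lambda v: v["P"]                   # premise 2
r           = lambda v: v["R"]                   # premise 3 (extra information)
q           = lambda v: v["Q"]                   # conclusion

print(entails([if_p_then_q, p], q))       # True: -> E applies
print(entails([if_p_then_q, p, r], q))    # still True: the extra premise changes nothing
```

So according to classical logic the "suppression" really is a mistake: no addition of premises can undo the consequence.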
How to Explain the Observed Results
What explains the results observed in the "Suppression Task" experiment?
Robert Kowalski suggests that the subjects do not understand the sentences in the way the experimenters expect and do not reason in the way the experimenters expect.
We will consider Kowalski's solution in more detail later, but part of his idea seems to be that the reasoning in which the subjects engage in the Suppression Task is defeasible reasoning, not conclusive reasoning. The logic programming/agent model (as it is currently understood as a way to answer queries in terms of backward chaining) does not implement defeasible reasoning.
Kowalski suggests (as we will see in a subsequent lecture) that the subjects understand the premises and the conclusion in such a way that the premises are defeasible reasons to believe the conclusion. Further, they understand the new information to defeat this reasoning.
Conclusive and Defeasible Reasoning
Conclusive and defeasible reasons stand in different relations to their conclusions.
Conclusive reasons are reasons to believe because there is a logical deduction from the reason to the belief. In logic programming, backward chaining (as we are currently understanding it) may be understood as an example of conclusive reasoning. If a query is answered positively, there is a logical deduction (a proof) from premises taken from the program to the query.
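The following Python sketch illustrates backward chaining over a toy propositional logic program (my own illustration, using the essay example; this is not Kowalski's code). A query succeeds only if there is a derivation of it from the program's clauses, which is why this counts as conclusive reasoning:

```python
# A toy propositional logic program: each head maps to a list of
# alternative bodies (lists of conditions). Facts have an empty body.
PROGRAM = {
    "she_studies_late": [["has_essay"]],   # she studies late if she has an essay
    "has_essay": [[]],                     # fact: she has an essay to write
}

def solve(goal):
    """Backward chaining: a goal holds iff some clause for it has a
    body all of whose conditions can in turn be solved."""
    for body in PROGRAM.get(goal, []):
        if all(solve(sub) for sub in body):
            return True
    return False

print(solve("she_studies_late"))   # True: derived from the two clauses
print(solve("library_open"))       # False: no clause, so the query fails
```

When `solve` answers positively, the chain of clauses it traversed constitutes a proof of the query from the program.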
Defeasible reasons are more difficult to characterize. They are reasons to believe, but not because there is a logical deduction from the reason to the belief. This means that if some premises are defeasible reasons for some conclusion, new information can make it rational to retract the belief in the conclusion while at the same time retaining belief in the premises.
An Undercutting Defeater
Suppose that I form the belief that some object is red because it looks red. The conclusion
"The object is red"
is not a logical consequence of the premise
"The object looks red."
There is no logical deduction from this premise to this conclusion, but in the absence of information to the contrary, the premise makes it reasonable to believe the conclusion.
Suppose, however, I acquire new information. Suppose I come to know that the object is illuminated by a red light and that red lights make white objects look red. This new information does not change the way the object looks. It still looks red to me. The object might even be red, but no longer is it rational for me to believe that it is red because it looks red. The new information undercuts the inference from the premise to the conclusion that the object is red.
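The shape of this reasoning can be sketched in Python (my own illustration, not Kowalski's formalism): a defeasible rule licenses the conclusion only while no undercutting defeater is among the things known.

```python
def believe_red(evidence):
    """Defeasible rule: 'looks red' supports 'is red' unless the
    inference is undercut by knowledge of deceptive lighting."""
    if "looks_red" not in evidence:
        return False
    if "red_light_illumination" in evidence:   # undercutting defeater
        return False                           # withhold the conclusion
    return True

print(believe_red({"looks_red"}))                             # True
print(believe_red({"looks_red", "red_light_illumination"}))   # False
```

Note the nonmonotonic behavior: adding information retracts the conclusion even though the original premise ("looks red") is retained, which is exactly what a logical deduction would not permit.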
A Rebutting Defeater
Suppose that I form the belief that all A's are B's because I have seen many A's and they have all been B's. This conclusion is not a logical consequence of the premise. Still, in the absence of contrary information, the premise makes it reasonable for me to believe that all A's are B's.
Now, though, suppose I see an A that is not a B. This does not change my belief that the A's I saw in the past were B's, but no longer is it rational for me to believe that all A's are B's because in the past all A's I saw were B's. The new information rebuts my belief that all A's are B's.
What We Have Accomplished in this Lecture
We looked at two experimental challenges to the logic programming/agent model. Further, we looked at the beginnings of Robert Kowalski's responses to these challenges.