Philosophy, Computing, and Artificial Intelligence

PHI 319

Computational Logic and Human Thinking
Chapter 16 (217-231)


How to Explain the Results in the Wason Selection Task

Kowalski argues that the hypothesis that humans do not naturally develop a capacity for logical deduction is not the only way to account for the results of the Wason Selection Task.

In the Wason Selection Task, the experimenters give the subjects a "task" to perform. The task is to make a certain selection among presented options. The experimenters pose this task in terms of cards the subject can see. There are four cards. The cards have letters on one side and numbers on the other. They are lying on a table so that only one side of each card is showing.

The subject is asked to select those and only those cards that must be turned over to determine whether the following statement is true:

(*) if a card X has the letter d on the letter side, then the card X has the number 3 on the number side


The Expected Results

The experimenters expect the subjects to check two cards:

modus ponens:
from observation φ ("d"), check for ψ ("3") on the other side.
modus tollens:
from observation ¬ψ ("7"), check for ¬φ ("not d") on the other side.
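
To see why these two selections are the classically correct ones, consider which hidden sides could falsify (*). The Python sketch below is only an illustration, not anything from Kowalski's text: the four visible faces "d", "3", "f", "7" are an assumption (the notes name only "d", "3", and "7"), and (*) is read as a material conditional, falsified only by a card with d on the letter side and a number other than 3 on the number side.

    # A minimal sketch: which cards must be turned over to test the material
    # conditional "if d on the letter side, then 3 on the number side"?
    # A card needs to be turned only if its hidden side could falsify (*).

    # Assumed visible faces, in the order discussed in these notes; the
    # letter "f" on the third card is a guess, since the notes name only
    # "d", "3", and "7".
    visible = ["d", "3", "f", "7"]

    def could_falsify(face):
        """Return True if the hidden side of this card could falsify (*)."""
        if face.isalpha():
            # Letter side visible: only a visible "d" matters, since its
            # hidden number might fail to be 3 (the modus ponens case).
            return face == "d"
        # Number side visible: only a visible non-3 matters, since its hidden
        # letter might be "d" (the modus tollens case). A visible "3" can
        # never falsify a material conditional.
        return face != "3"

    must_turn = [face for face in visible if could_falsify(face)]
    print(must_turn)  # ['d', '7'] -- the selection the experimenters expect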

The Observed Results

Almost all subjects perform modus ponens and fail to perform modus tollens. In addition, many subjects commit what is, from a logical point of view, the fallacy of affirming the consequent:

affirming the consequent:
from observation ψ ("3"), check for φ ("d") on the other side.

Kowalski's Explanation of the Observed Results

Kowalski stresses that to understand the observed results, it is necessary to appreciate that subjects interpret the words they hear in ways the experimenters do not expect. He summarizes this idea in the following equation: "natural language understanding = translation into logical form + general purpose reasoning" (Computational Logic and Human Thinking, 217).

The experimenters assume that (*) is a material conditional (φ → ψ), but Kowalski suggests that not all of the subjects understand the sentence in this way.

How, then, do they understand the sentence?

"In our agent model, the agent’s response depends upon whether the agent interprets the conditional as a goal or as a belief" (Computational Logic and Human Thinking, 218).

The conditional as a belief: why subjects perform modus ponens

"In Computational Logic, conditional beliefs are used to reason both backwards and forwards. In particular, given a (passive) observation of a positive predicate P, forward reasoning with the conditional if P then Q derives the positive conclusion Q. This is a classically correct application of modus ponens.... If the conclusion Q is observable, and there is a reason to check Q, because there is some doubt whether the conditional is actually true, then the agent can actively observe whether Q is true" (Computational Logic and Human Thinking, 221). On the logic programming/agent model, agents use their beliefs in forward reasoning from observations. This explains why so many subjects perform modus ponens (= select the first card). If they incorporate the conditional as a belief, they naturally fall into forward reasoning when they observe the antecedent of this conditional.

The conditional as a belief: why subjects affirm the consequent


"The challenge is to explain why most people reason correctly in some cases, and seemingly incorrectly in other cases. Part of the problem, of course, is that the psychological tests assume that subjects have a clear concept of deductive inference. But we have seen that even Sherlock Holmes had trouble distinguishing deduction from abduction. ... This explains why most subjects commit the deductive fallacy of affirmation of the consequent, which is not a fallacy at all, when these considerations are taken into account" (Computational Logic and Human Thinking, 218).

"In Computational Logic, conditionals are also used to explain observations. Given an observation of Q, backward reasoning derives P as a candidate explanation of Q. This derivation can be viewed ... as abduction with the conditional if P then Q .... In classical logic, this form of reasoning is called the fallacy of affirmation of the consequent" (Computational Logic and Human Thinking, 221). .
On the logic programming/agent model, agents also use their beliefs in abductive reasoning to explain their observations. This explains why it is natural for subjects to affirm the consequent (= select the second card). If they incorporate the conditional as a belief, they naturally fall into abductive reasoning when they observe the consequent of this conditional.
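
The same conditional belief can also be run "backwards" to produce candidate explanations. The Python sketch below is, again, only an illustrative reconstruction, not Kowalski's code: given the observation "3", it returns "d" as a candidate explanation, the step classical logic labels affirming the consequent.

    # A minimal sketch of abduction with the same conditional belief.
    # Observing the conclusion, backward reasoning returns the antecedent as a
    # candidate explanation of the observation.

    beliefs = [("d", "3")]  # conditionals of the form (antecedent, conclusion)

    def abduce(observation):
        """Collect antecedents that would explain the observed conclusion."""
        return [ant for (ant, concl) in beliefs if concl == observation]

    candidates = abduce("3")
    print(candidates)  # ['d'] -- so check the letter side for 'd'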

It might seem, then, given this explanation, that subjects should perform modus ponens and affirm the consequent equally often. This, however, is not what happens in the experiment.

The reason, perhaps, is that abductive reasoning is more difficult than forward reasoning.

The conditional as a belief: why subjects fail to perform modus tollens

With respect to whether to select the fourth card, what the subjects observe is

the fourth card has the number 7 on the number side.

From this observation, in order to perform modus tollens with

(*) if a card X has the letter d on the letter side, then the card X has the number 3 on the number side

the subject must first see that the negative statement

it is not the case that the fourth card has number 3 on the number side

is a consequence of his or her observations. This, however, according to Kowalski, is not easy for the subject to do because the observation

the fourth card has the number 7 on the number side

does not explicitly contain the negative statement. To derive it, the subject must pose the positive statement ("the fourth card has the number 3 on the number side") as a query and conclude its negation by negation-as-failure reasoning, since the query cannot be shown to hold. Kowalski says that this extra step is why so many subjects fail to perform modus tollens.
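
The extra step can be made explicit in a sketch. The Python below is only an illustrative reconstruction, not Kowalski's code: the negative statement "not 3" is not among the observations and has to be derived from the observation of "7" by negation as failure before modus tollens can be applied.

    # A minimal sketch of the extra step needed for modus tollens.
    # The observation "the number side shows 7" does not by itself contain
    # "not 3"; the agent must pose "3" as a query and conclude "not 3" by
    # negation as failure, i.e. because "3" cannot be derived.

    observations = {"number_side": "7"}  # what the subject passively sees

    def holds(fact):
        """Can the positive fact be derived from the observations?"""
        return observations.get("number_side") == fact

    def naf(fact):
        """Negation as failure: 'not fact' holds if 'fact' cannot be derived."""
        return not holds(fact)

    if naf("3"):
        # Only after this derivation can modus tollens be applied to (*),
        # prompting the agent to check that the letter side is not d.
        print("not 3 holds -- turn the card to check that the letter is not d")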

The conditional as a goal: why subjects perform modus tollens

Kowalski also thinks that the logic programming/agent model explains why subjects do better with "meaningful" conditionals, such as

If a person is drinking in a bar, then the person is at least twenty-one years old

"[W]e have seen a variety of uses for an agent's conditional goals. Their primary use is to help the agent maintain a harmonious personal relationship with the changing state of the world. However, conditional goals can also serve a secondary function of helping to maintain harmony in a society of agents as a whole. In both cases, conditional goals regulate the behaviour of agents, both generating and preventing actions that change the state of the world. However, conditional goals can also serve a secondary function of helping to maintain harmony in the society of agents as a whole. In both cases, conditional goals regulate the behaviour of agents, both generating and preventing actions that change the state of the world" (Computational Logic and Human Thinking, 225). According to Kowalski, it is natural for the subject to treat this sentence like a conditional goal he is tasked to enforce (similar to an integrity constraint on the world or on society).

This explains why subjects perform modus tollens with these "meaningful" conditionals. If the agent observes an underage person, the agent will need to make sure that the antecedent fails. So he or she will attempt to observe whether the person is drinking in a bar. If the person is drinking in a bar, then the rule the agent is trying to enforce is violated.

In this way, the subjects take

If a person is drinking in a bar, then the person is at least twenty-one years old

as an integrity constraint

If X is drinking alcohol in a bar and X is under twenty-one years old, then false

Once the agent observes an underage person, he or she will need to make sure that the person is not drinking. Otherwise there is a violation of "integrity."
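
A sketch may make this goal reading concrete. The Python below is only an illustrative reconstruction, not Kowalski's code: the bar rule is treated as an integrity constraint that is violated exactly when both conditions hold, so observing an under-twenty-one person prompts the agent to check the remaining condition.

    # A minimal sketch of the conditional read as an integrity constraint:
    # if X is drinking alcohol in a bar and X is under twenty-one, then false.

    def violates_integrity(drinking, under_21):
        """The constraint is violated exactly when both conditions hold."""
        return drinking and under_21

    # The agent observes an under-twenty-one person (the analogue of the "7"
    # card) and so must actively check whether that person is drinking.
    observed_under_21 = True
    observed_drinking = True  # discovered by the analogue of turning the card

    if violates_integrity(observed_drinking, observed_under_21):
        print("Integrity violated: an under-twenty-one person is drinking in the bar.")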

The conditional as a goal: why subjects do not affirm the consequent

This also helps explain why the subjects do not affirm the consequent. Observing that the consequent of the rule the agent is trying to enforce is true does not trigger any reasoning.

What we have accomplished in this lecture

We looked at Kowalski's explanation of the results of the Wason Selection Task. He argues that the logic programming/agent model accounts for the observed results.



