Philosophy, Computing, and Artificial Intelligence

PHI 319. Thinking is Computation.


"A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence," 1955.

The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years, AI Magazine, Volume 27 Number 4, 2006.



Other Historical Documents:

"The Logic Theory Machine," 1963.



To say that "human beings are rational agents" is to attribute to them a certain ability and to say they are subject to criticism if they do not exercise this ability correctly. It is not to say that their beliefs and actions are always rational.
Robert Kowalski. Computational Logic and Human Thinking.
Introduction (14-21) Chapter 1 (22-37).

Hector Levesque. Thinking as Computation.
Chapter 1 (1-5, 11-21), Chapter 2 (23-38).


Artificial Intelligence

AI ("Artificial Intelligence") is the study of intelligence. AI is not the same as cognitive science. Cognitive science studies human intelligence. AI studies intelligence more generally.

In this course, the focus is on understanding the intelligence that characterizes a rational agent.

What is a rational agent? What is the intelligence of a rational agent?

Human beings are rational agents, and their intelligence is a primary example of the intelligence of a rational agent that we seek to understand and to model in this course. Human beings, further, can think about whether their thoughts and actions are correct.

So it is not too misleading, at least initially, to understand the content of this course as an exercise of this ability to think about our thoughts and actions in the effort to design the "mind" of a rational agent whose intelligence is human in certain limited ways.

Can we say more generally what rationality and intelligence are?

The Rationality of an Agent

It may not be possible to give a definition of rational, but we can begin to get a deeper understanding by considering two characterizations that may initially seem plausible but in fact are open to counterexample and hence are false. (This will also provide an introduction to some of the practices and investigative techniques in philosophy.) The background assumption for these characterizations is that an agent is rational just in case his or her actions are rational. Given this assumption, we can think about what it is for an action to be rational.

The claims about what rational action is take the form of "___ if and only if ___" claims of necessity. For such claims to be true, the statements in the blanks on the two sides of the biconditional (the "if and only if") cannot vary in truth-value. So one way to try to show that the biconditional is not true is to imagine a situation in which one of the statements is true and the other is false. The more plausible it is to think the imagined situation is possible, the more plausible it is to think that the situation imagined is a counterexample to the "___ if and only if ___" claim under consideration.

The following are different ways to say the same thing:
 • If P is true, then Q is true
 • The truth of P is sufficient for the truth of Q
 • The truth of Q is necessary for the truth of P.
We might think that (*) an action is rational if and only if (henceforth abbreviated as "iff") it accomplishes the agent's intended goal in performing the action.

This account characterizes rational action in terms of the success of the outcome.

A little reflection, however, shows that this account of rational action is open to counterexample. Some actions are rational even though they do not accomplish the agent's goal, and some actions are not rational even though they do accomplish the agent's goal.

Suppose that someone gets a flu shot. If he comes down with the flu, this itself is no reason to think that getting the shot was not rational. Success, then, as this counterexample shows, is not a necessary condition for rational action. An action might be rational even though it does not accomplish the goal the agent intended in performing the action.

Neither is success a sufficient condition for rational action.

Suppose someone uses his life savings to buy tickets in a lottery. He knows that the probability of a given ticket winning is extremely small, but he thinks that today is his lucky day. Buying the ticket is not rational even if he wins. So again there is a counterexample to (*).

What conclusion about rational action can we draw from these counterexamples?

The following are different ways to say the same thing:
 • If P is true, then Q is true
 • P is true only if Q is true
 • Q is true if P is true




Cognition is subject to evaluation as rational or irrational if it is something the agent can control. So, for example, when some object looks red to me, the looking red is something that happens to me because of how my eyes and brain work. It is not subject to evaluation as rational or irrational. If, however, I form the belief that the object is red, this belief is subject to evaluation. I do not have to form the belief.

In addition to the ability to judge colors by looking, the human mind seems to contain many other modules whose output we can accept or reject. We can get information about how large an object is by looking at it, but we also know that this information is not reliable when the object is far away. How large the object looks is not subject to evaluation as rational or irrational, but a belief about its size formed on this basis is subject to such evaluation.
In thinking about these counterexamples, it appears that rationality consists in thinking correctly and that whether an action is rational is a matter of whether it is the product of such thinking.

What is it to think (or cognize) correctly?

Even if we cannot answer this question, it seems clear that we can sometimes recognize instances of rational and irrational thinking both in ourselves and in others. Otherwise, we would not have been able to imagine the counterexamples. This ability to recognize instances is enough for us to begin to construct a model of the intelligence of a rational agent.

Russell and Norvig's Definition

Russell and Norvig (authors of the standard textbook on AI in computer science) define rationality in terms of what they call a "performance measure."

"For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has" (Artificial Intelligence. A Modern Approach, 37).

As an example, they consider a vacuum cleaner agent. Its environment consists of two locations (A and B) arranged next to each other (A B). A location can be dirty or clean. The agent (the vacuum cleaner) is capable of various actions: it can perceive its current location and whether it is dirty or clean, can suck in its current location, can move to A, and can move to B.

To be a rational agent, as opposed to a mere machine, how should the vacuum cleaner "think"? Here is one possible observation-thought-decision-action cycle:

1. Perceive current location and whether it is dirty or clean
2. If the location is dirty, suck
3. If the location is A, move to B
4. If the location is B, move to A
5. Repeat (go to 1)
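
As a preview of the logic programming notation introduced later in this lecture, here is one way (a sketch of my own, not Russell and Norvig's) to write down what a single pass through this cycle does for each possible percept:

% Locations are written a and b (lowercase, because in Prolog capital letters are variables).
% actions(Location, State, Actions) lists, in order, what one pass through the cycle does.

actions(a, dirty, [suck, move_to_b]).    % step 2 (suck), then step 3 (move to B)
actions(a, clean, [move_to_b]).          % nothing to suck, so just step 3
actions(b, dirty, [suck, move_to_a]).    % step 2 (suck), then step 4 (move to A)
actions(b, clean, [move_to_a]).          % nothing to suck, so just step 4

% Example query: ?- actions(a, dirty, Actions).   Actions = [suck, move_to_b].

Step 1 (perceiving) supplies the first two arguments, and step 5 corresponds to running the query again on the next percept.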

Does the vacuum cleaner possess concepts? It can discriminate between dirty and clean. Is this enough for it to have concepts of dirty and clean?

What is it for something to have a concept?

One answer is that for discrimination to show the presence of a concept, the discrimination must have consequences for what the discriminator should be able to think.

"We might train a parrot reliably to respond differentially to the visible presence of red things by squawking 'That’s red.' It would not yet be describing things as red, would not be applying the concept red to them, because the noise it makes has no significance for it. It does not know that it follows from something’s being red that it is colored, that it cannot be wholly green, and so on. Ignorant as it is of those inferential consequences, the parrot does not grasp the concept..." (Robert Brandom, "How Analytic Philosophy Has Failed Cognitive Science").
Is this "thinking" rational or irrational? Or is this "thinking" not thinking at all?

We can, certainly, say whether the vacuum cleaner agent does what we want it to do.

If we want the vacuum cleaner to keep A and B clean, then this "thinking" makes it do what we want. So, as a result, we might appraise the vacuum cleaner positively. We might say that it is a good vacuum cleaner and that it performs its function well.

Compare this vacuum cleaner to one whose observation-thought-decision-action cycle is

1. Perceive current location and whether it is dirty or clean
2. Suck
3. If the location is A, move to B
4. If the location is B, move to A
5. Repeat (go to 1)

This vacuum cleaner agent keeps the two locations clean. So in this respect its performance is no worse than the previous agent, but it sometimes engages in what we might think is wasted action. It sucks its current location whether or not the location is dirty.

Does this make the second vacuum cleaner agent less rational than the first?

The Intelligence of a Rational Agent

What about the intelligence of a rational agent? Can we say what that is?

"Consider the lowly dung beetle. After digging its nest and laying its eggs, it fetches a ball of dung from a nearby heap to plug the entrance. If the ball of dung is removed from its grasp en route, the beetle continues its task and pantomimes plugging the nest with the nonexistent dung ball, never noticing that it is missing. Evolution has built an assumption into the beetle’s behavior, and when it is violated, unsuccessful behavior results. Slightly more intelligent is the sphex wasp. The female sphex will dig a burrow, go out and sting a caterpillar and drag it to the burrow, enter the burrow again to check all is well, drag the caterpillar inside, and lay its eggs. The caterpillar serves as a food source when the eggs hatch. So far so good, but if an entomologist moves the caterpillar a few inches away while the sphex is doing the check, it will revert to the 'drag' step of its plan and will continue the plan without modification, even after dozens of caterpillar-moving interventions. The sphex is unable to learn that its innate plan is failing, and thus will not change it" (Artificial Intelligence. A Modern Approach, 39). Intelligence in a rational agent, it seems, consists in the ability the agent possesses to form beliefs by drawing conclusions in certain circumstances that help it achieve its goals.

An example may make this a little clearer.

Consider an agent with a goal to avoid certain things in the environment, say coming close to a fire. Suppose that this agent forms beliefs about its environment through perception. Further, suppose that this agent observes the presence of smoke in its immediate surroundings. The agent observes smoke but does not observe a fire, so the observation of smoke itself does not provoke a response. Suppose, however, the agent has the ability to reason from its observation that there is smoke to the conclusion that a fire is the cause of this smoke. The ability to form beliefs in this way can help it achieve its goals. Further, compared to an agent who lacks this ability to reason to causes from what it observes, this agent clearly seems more intelligent.

The particular form of intelligence in the "fire" example consists in an ability to engage in a certain kind of reasoning. Human rational agents have this intelligence, but they have other forms too. In this course, we will think about some of them and whether we can use certain techniques in logic to represent them in sufficient detail to implement them on a machine.

This approach means that to represent forms of intelligence, we need to understand certain elementary parts of logic. The understanding we need is the subject of the next lecture. This is not a course in logic, so we will not worry about many of the more technical details.

The Observation-Thought-Decision-Action Cycle

The discussion of intelligence in a rational agent brings to light what in hindsight seems obvious: that rational agents try to determine whether things are to their liking, that they try to make them better if they are not, and that they do this over and over again in a cycle.

The thinking that underlies this cycle falls roughly into two parts with different functions. The aim in one part is to form beliefs about how the world is. We can call the thinking that discharges this function "epistemic" cognition. The word 'epistemic' is a near transliteration of the Greek noun ἐπιστήμη (which is often translated into English as 'knowledge.') In ancient Greek philosophy, the dominant philosophical tradition thought that knowledge of certain aspects of the world is necessary for human beings to orient themselves properly and thus for them to live the kind of lives the ancient philosophers understood as good lives. The aim in the other part is to evaluate the world as represented by these beliefs, select plans aimed at changing the world, and execute these plans. We can call the thinking that discharges this function "practical" cognition.

This cycle is the basic form that intelligence takes in a rational agent. All other forms of intelligence fit somewhere in this cycle. So, in the "fire" example, the belief that there is smoke and the conclusion that there is fire is a function of epistemic cognition.

The observation-thought part of the cycle

To understand the observation-thought part of the cycle, it is necessary to solve several problems. For epistemic cognition, one of them is traditionally called

• the problem of knowledge

To act to maintain themselves and whatever else they do, it seems that rational agents must have the ability to acquire beliefs about themselves and about the environment in which they exist. This is necessary because how the agent acts depends on how it thinks the world is.

For now, we can say that the beliefs on which the agent relies are "knowledge."

Some knowledge about the environment may be built in, just as in human beings some knowledge may be innate, but in a world with a changing environment, it seems that neither a human being nor an artificially rational agent can be equipped from its inception with all the information it needs. Both must acquire new beliefs by sensing their surroundings.

"A computer program capable of acting intelligently in the world must have a general representation of the world in terms of which its inputs are interpreted. Designing such a program requires commitments about what knowledge is and how it is obtained. Thus, some of the major traditional problems of philosophy arise in artificial intelligence" (John McCarthy & Patrick J. Hayes, "Some Philosophical Problems from the Standpoint of Artificial Intelligence," 1969). This may help explain why progress in AI has been slower than some have anticipated. Not all the problems are straightforward engineering problems. The problem of knowledge is an example. No set of procedures for getting information about the environment is a solution unless they are procedures for forming and maintaining rational beliefs, and whether a procedure for getting information has this property is at least partly a question in philosophy.




"Logical AI involves representing knowledge of an agent’s world, its goals and the current situation by sentences in logic. The agent decides what to do by inferring that a certain action or course of action was appropriate to achieve. The inference may be monotonic, but the nature of the world and what can be known about it often requires that the reasoning be nonmonotonic" (John McCarthy, "Concepts of Logical AI," 2000).
As an example of forming and maintaining beliefs that can result in knowledge, consider the way human beings form rational beliefs about the world from perception.

Perception is a process that begins with the stimulation of sensors of some sort and ends with beliefs about immediate surroundings. However all the details in the process from sensation to belief are worked out, it is clear that perception does not always result in true beliefs. A perceptual belief P (where P is a variable that ranges over the beliefs possible given the perceptual apparatus of the agent) is defeasible. That is to say, it is possible for the agent to acquire new information that makes it rational to withdraw the belief it formed from the perception. For example, it is rational for a human agent who sees what looks to be a red object to form the belief that the object is red. Suppose, however, he subsequently becomes aware of the possibility that the light in the circumstances was not normal. Further, suppose that he believes that light can make an object that is not red appear red. In these circumstances, it is rational for him to retract his belief that the object is red. He need not retract it. He has other options, but retracting the belief in these circumstances is something he is permitted to do.

It has proven difficult to set out general procedures for forming and maintaining defeasible beliefs in the detail necessary to implement the procedure on a machine. Later in this course, we will consider some attempts to describe particular forms of defeasible reasoning.

The decision-action part of the cycle

Of course it is not enough for an agent simply to observe the world and to form beliefs about how the world is. To maintain itself, the agent must act in various ways. Further, at least some of its actions must be based on the beliefs it formed about how the world is.

Whether the agent likes its situation depends on what it believes is true of its situation. So one thing an agent can do to bring it about that it likes its situation is to change its beliefs. We usually change our beliefs to track changes in the world, but we can instead use non-rational means to abandon a belief.

Is it rational for an agent to use non-rational means to abandon a belief so that it likes its situation more?

The answer may depend on the case. Some beliefs can make a person extremely miserable, so it might be rational to undergo hypnosis, say, to get rid of them.
In the case of such actions, the agent will have evaluated the world as represented by its beliefs. If evaluation shows that the world is not enough to its liking, the agent will typically form and evaluate plans to make the world more to its liking. On the basis of this evaluation, the agent will select one of these plans and execute it. At this point, the cycle will repeat.

How does an agent evaluate the world?

Earlier we said that the agent determines whether it likes or dislikes the way the world is.

How does an agent order its likings and dislikings?

It may seem that this should be easy to answer because we recognize that we, as rational agents, engage in such thinking, but, as with so much about the intelligence of a rational agent, it has proven difficult to describe the underlying cognitive mechanism in the detail required to create an artificially rational agent. We consider some attempts later in this course.

The Thought Part of the Cycle

First we need to get straight on the "knowledge" on which the agent relies. This "knowledge" is how the agent thinks the world is. The question is whether it is knowledge or belief.

What is knowledge? What is belief?

The questions what is knowledge and what is belief are philosophical questions. Answers to them are controversial, but it is possible to make some progress by thinking about how we use the words 'know' and 'believe' in ordinary English language sentences.

Consider how we use 'know.' Sometimes we use it as a propositional attitude verb. The sentence Tom knows that Socrates died in 399 BCE is an example. We also use the word 'believe' as a propositional attitude verb. Tom believes that Socrates died in 399 BCE is an example.

To begin to understand what knowledge is, we can think about the difference in the attitudes these two verbs express in these sentences. In both sentences, the attitude is toward the same proposition: that Socrates died in 399 BCE. What propositions are is a difficult issue, but there are straightforward constructions in English that allow us to talk about them. When we nominalize a declarative sentence, we form a phrase that can be used as a subject in a sentence. So, for example, we can nominalize

"Socrates died in 399 BCE"

to form

"that Socrates died in 399 BCE."

Now we can use this phrase to say things about the proposition. We can say

"That Socrates died in 399 BCE is true"

or

"That Socrates died in 399 BCE is something historians believe is true."

Given this much, we can understand that certain verbs in English are propositional attitude verbs. Examples are 'knows,' 'believes,' 'fears,' 'hopes,' and so on. We can say of someone that he or she knows that Socrates died in 399 BCE, believes that he died in 399 BCE, and so on.



Arguments have premises and a conclusion. Declarative sentences express the premises and conclusion. An argument is valid iff it is impossible for the conclusion to be false if the premises are true. An argument is sound iff it is valid and its premises are true. So the conclusion is true in a valid argument whose premises are true.
This is the proposition Tom is said to know in the first sentence and to believe in the second. The difference in these sentences is in the attitude each ascribes to the subject. The first says he knows the proposition. The second says he believes it.

The two sentences say different things. This follows from the fact that it is possible for the sentences to differ in truth-value. What someone knows he believes, but what someone believes he does not necessarily know. So it is possible for the sentences to differ in truth-value because it is possible for Tom to believe but not to know that Socrates died in 399 BCE.

Another way to express this point is in terms of the following arguments:

Knowledge entails Belief
1. Tom knows that Socrates died in 399 BCE
----
2. Tom believes that Socrates died in 399 BCE

Knowledge entails Truth
1. Tom knows that Socrates died in 399 BCE
----
2. Socrates died in 399 BCE

These two arguments are valid. In each case, it is impossible for the conclusion to be false if the premise is true. The same, however, is not true for these arguments:

True Belief entails Knowledge
1. Tom believes that Socrates died in 399 BCE
2. Socrates died in 399 BCE
----
3. Tom knows that Socrates died in 399 BCE

Belief entails Truth
1. Tom believes that Socrates died in 399 BCE
----
2. Socrates died in 399 BCE

In each case, it is possible for the premises to be true and the conclusion to be false.

For discussion of knowledge and its philosophical analysis in terms of justification, belief, and truth, see The Analysis of Knowledge in the Stanford Encyclopedia of Philosophy.

The reason the argument True Belief entails Knowledge is invalid is that a true belief need not be knowledge. It is possible that the subject (Tom) has a true belief (that Socrates died in 399 BCE) but does not have knowledge because his belief is not justified.

Why does "justified" mean here?

There are different kinds of justification depending on the domain of normativity in question. In the case of beliefs, the justification is relative to the domain of rationality. So the justification necessary for knowledge is rational justification, as opposed, say, to moral justification. Rational beliefs are the product of correct thinking. They are formed rationally.

Knowledge requires that the subject be in a special position with respect to the proposition he knows. Lucky guesses are not knowledge, and a standard way in philosophy to express this is to say that the agent must have "justification" for the belief for it to be knowledge.

To understand this, suppose that the subject (Tom) has the belief on the basis of a dream. In this case, when we think about it, it seems very plausible to say that he does not know that Socrates died in 399 BCE even if the proposition is true. If we ask ourselves why, the reason is that he did not form the belief in a correct way. Beliefs about the distant past formed on the basis of a dream are not knowledge even if they happen to be true. If someone questions Tom about why he believes that Socrates died in 399 BCE, he does not show he has justification for his belief by saying that he saw Socrates drink the hemlock in a dream. His belief is not knowledge.

The reason that Belief entails Truth is invalid is more obvious. From the mere fact that Tom has the belief that Socrates died in 399 BCE, it does not follow that the belief is true.

"Thinking ... starts with an enormous collection of premises (maybe millions of them) about a very wide array of subjects)" (Levesque, Thinking as Computation, 19).

"[T]hinking means bringing what one knows to bear on what one is doing. But how does this work? How do concrete, physical entities like people engage with something formless and abstract like knowledge? What is proposed in this chapter (via Leibniz [polymath, 1646-1716]) is that people engage with symbolic representations of that knowledge. In other words, knowledge is represented symbolically as a collection of sentences in a knowledge base, and then entailments of those sentences are computed as needed" (Levesque, Thinking as Computation, 19).

Levesque's understanding of thinking makes it presuppose knowledge (and hence beliefs since knowledge entails belief).
Given this understanding of knowledge and belief, we can ask and begin to decide whether 'know' is really the right word in Levesque's assertion in Thinking as Computation that "[t]hinking is bringing to bear what you know on what you are doing" (3).

Levesque's view is "that thinking is a form of computation," that just as "digital computers perform calculations on representations of numbers, human brains perform calculations on representations of what is known" (2), and that the calculation performed consists in bringing what the human being knows to bear on what the human being is doing.

Is Levesque right? Is knowledge the state in terms of which a rational agent makes decisions?

How do we determine whether Levesque is right?

One way (and perhaps the only way at present) is to consider possible counterexamples.

Suppose someone is considering whether to carry an umbrella when he goes out for the day. To decide, he looks out the window. It looks cloudy to him, so he decides to bring an umbrella. In fact, it is not cloudy. The window is only painted so that it looks cloudy outside. Since knowledge entails truth, he believes but does not know that it is cloudy outside.

This "umbrella" example seems to show that the state the agent uses to represent the world and brings to bear in deciding what to do is belief, not knowledge. The agent thinks it is likely to rain because he believes it is cloudy outside. His belief is false, but it is rational.

The "Knowledge Base" (KB)

This has implications for how we understand the Observation-Thought-Decision-Action Cycle. Recall that we have said that part of the intelligence of a rational agent consists in acting against the background of a representation of the world. It is traditional in AI to call this representation a "knowledge base" (KB) and to suppose that it contains propositions.

The "umbrella" example shows (or at least suggests) that the "knowledge base" (KB) in our model of the intelligence of a rational agent should consist in propositions the agent believes. Rational agents represent their circumstances in terms of their beliefs. Some of these beliefs may be knowledge, but belief, it seems, is what the agent uses to represent the world.

This, at any rate, will be our assumption in the course until further notice. Although the assumption for now is that belief (not knowledge) is what the agent uses to represent the world, we will continue to use the term "knowledge base" (KB).

The Computation on the Knowledge Base

In addition to having beliefs, rational agents use their beliefs in reasoning. (We saw an instance of this in the "fire" example.) The assumption in this course is that this reasoning is a computational process. Further, in this course, a basic part of the algorithm in terms of which we will represent reasoning is what we, following tradition, will call backward chaining.

"The core idea is that an intelligent agent receives percepts from the external world in the form of formulae in some logical system (e.g., first-order logic), and infers, on the basis of these percepts and its knowledge base, what actions should be performed to secure the agent’s goals" (Stanford Encyclopedia of Philosophy, Artificial Intelligence, 3.2).

We can say that a process is computational iff there is an algorithm to compute it. We know, for example, that the addition of two integers and certain other mathematical operations on numbers are computational processes because we know there are algorithms to compute them.

What is an algorithm?

This is a harder question to answer, but here is an easy-to-understand example for addition. Suppose we represent numbers with hash marks, so | for 1, || for 2, and so on. Then concatenation is an algorithm we can use to compute addition. To determine what number | + || is, we concatenate | and || to make |||.
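
Here is the same hash-mark algorithm written as a short program in the Prolog notation used later in this lecture (the encoding of hash marks as lists of the letter h is my own choice, made just for illustration):

% | is written [h], || is written [h, h], and so on.
% add(X, Y, Z) holds when concatenating the marks in X and Y produces Z.

add([], Y, Y).                            % concatenating no marks onto Y leaves Y
add([h | X], Y, [h | Z]) :- add(X, Y, Z). % carry one mark over, then concatenate the rest

% Example query: ?- add([h], [h, h], Sum).   Sum = [h, h, h], that is, | + || = |||.
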
This raises a host of questions.

What is a computation?

Computations in this course operate on symbolic structures.

What is a symbolic structure?

The agent's beliefs are propositions in its "knowledge base" (KB). An assumption in this course is that these propositions correspond to sentences in a formal language. In AI, this formal language is traditionally the language of the first-order predicate calculus. The sentences in this formal language are symbolic structures. (We will consider these structures in the first-order predicate calculus in more detail in the next lecture.) It is an assumption of this course that reasoning corresponds to computations that operate on the sentences in the formal language. These sentences, in turn, are the propositions in the agent's "knowledge base" (KB).
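
As a preview of what such sentences look like (the example is mine; the details of the notation are the subject of the next lecture), here are two beliefs written as symbolic structures in Prolog notation, together with a query whose answer can be computed from them:

mortal(X) :- human(X).        % "every human is mortal"
human(socrates).              % "Socrates is human"

% ?- mortal(socrates).        the positive answer is computed from the two sentences above.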

What is the operation or operations in these computations?

"[L]ogic programming ... is the most widely used form of automated reasoning" (Stuart J. Russell and Peter Norvig Artificial Intelligence, A Modern Approach, 3rd edition, 9.4.337). This approach to AI is sometimes called "logic-based AI" (Stanford Encyclopedia of Philosophy, Artificial Intelligence). Logic-based AI is the most straightfoward approach to AI to consider from within philosophy because traditionally logic is part of the philosophy major. In the logic programming/agent model, backward chaining is the fundamental operation.

What does this operation do?


Levesque (in 2.2 of Thinking as Computation) talks about computing "logical entailment" (24). For now, it is not necessary to worry about the difference between logical consequence and logical entailment. We will consider the difference in more detail in the next lecture.
It computes logical consequence.

What is logical consequence? How does backward chaining compute logical consequence?

These questions take some time to answer. For now, we will consider an example.

Backward Chaining on the KB

The idea that motivates backward chaining goes all the way back to Aristotle (384-322 BCE). He observed that deliberation about how to satisfy a goal is a matter of working backwards in thought from the goal to something the person takes him or herself to be able to do. Aristotle's idea, roughly, is that as part of having reason, human beings form goals for the things they believe are good. Further, in order to take action, they deliberate about what to do to achieve these goods in the circumstances in which they find themselves. Deliberation, in this way, is a "goal-reduction procedure." It is a procedure to reduce the goal to something one can do.

A simple example helps illustrate the use of backward chaining in a goal-reduction procedure.

For the example, suppose that there are basic actions. These are actions someone can do without doing something else. Lifting my right arm above my head might be an example. I can lift my right arm above my head, and I can do it without doing anything else. Because my right arm is not impaired in any way, I do not have to use something, say my left arm, to raise my right arm over my head. By contrast, opening a door is not a basic action. I can open a door, but I have to do it by doing several other more basic things. I have to grab the knob, twist, and pull the door open. So, in order to open the door, I do several other things. That is to say, for it to be true that I open the door, it has to be true that I grab the knob, twist, and pull open the door.

This general idea may be reflected formally. A formula of the form

a ← b, c.

may be understood to say that for a to be true, it is sufficient for b and c to be true. By contrast,

d.

may be understood to say that d is true. There is no backward arrow (←) in this second formula (in contrast to the first formula) because we are saying simply that d is true.

Why is the KB called a program? The idea is that, given an input, the KB can be "run" in order to produce an output. The input to the KB is called a query. Backward chaining occurs when the KB is run with this input. The output is positive if the input is a logical consequence of the KB.

Now, given this explanation of the formulas, suppose that an agent has beliefs about the conditions sufficient for various possible actions. Suppose that these beliefs are in the knowledge base (KB) or what in the context of logic programming is called a program:

a ← b, c.   "I can make a true if I can make b and can make c true"
a ← f.
b.             "I can make b true"
b ← g.
c.
d.
e.

The program is on the left; the glosses in quotation marks on the right indicate how the agent might express the formulas in English.

Here is a more formal description of the backward chaining that occurs in the example. a is the query. It is posed to the KB. To answer the question of whether the query is a logical consequence of the KB, backward chaining occurs. The first step in the algorithm is to determine whether the query matches the head of one of the formulas in the KB. Given the KB in the example, the query matches the head of the first formula in the KB. (a is the head in the formula a ← b, c. The tail is b, c.) Given this match, backward chaining now issues in two derived queries, b and c. (The tail provides the derived queries.) These queries are processed last in, first out (LIFO). b matches the head of the third formula in the KB. There is no derived query. The remaining query is c. It matches the head of the fifth formula. There is no derived query. Now there are no more queries to process, so backward chaining stops and a positive answer is returned to the query: a is a logical consequence of the KB. If the agent asks itself whether "I can make a true" is a logical consequence of what it believes, it can "deliberate" to determine the answer by reasoning roughly as follows:

There are two ways I can make a true.
(These ways are the first two entries in the KB.)
The first way is that I can make b and can make c true.
I can make b, and I can make c true.
(These facts are the third and fifth entries in the KB.)
So "I can make a true" is a logical consequence of what I believe.

Someone who goes through this reasoning is reasoning backwards from the goal (a) to basic actions (b and c) that are sufficient for the truth of the goal.

An Example in Prolog Notation



Prolog is a computer programming language we will occasionally use in this course. We will consider how Prolog works in more detail in the next lecture. Our goal is not to become Prolog programmers. This is not a course in computer science. We consider Prolog only to show how one form of intelligence (computing logical consequence) in a rational agent can be implemented on a machine.
Here is a (slightly more complicated and abstract) goal-reduction example in Prolog notation.

Suppose in this example the query

?- a, d, e.

is put to the following KB or program

a:-b, c. (this is Prolog notation for "a ← b, c.")
a:-f.
b.
b:-g.
c.
d.
e.
f.

There are three possible computations given the query and the KB. These computations may be understood in terms of the following (upside down) tree whose nodes are the query lists. The commentary below the tree is for the computation represented in the leftmost branch.

                                                                   
                        ?- a, d, e.
                       /           \
          ?- b, c, d, e.            ?- f, d, e.
            /          \                 |
    ?- c, d, e.    ?- g, c, d, e.     ?- d, e.
         |               |                |
     ?- d, e.            •             ?- e.
         |                                |
      ?- e.                               ⊥
         |
         ⊥

Commentary on the leftmost branch:

?- a, d, e.        The initial query list is a, d, e. This list is processed last in,
                   first out. The last query pushed onto the list is a, so a is
                   processed first. To process a, the KB is searched top-down for a
                   match with the head of one of the clauses. a matches the head of
                   the first rule a:-b, c. The tail (b, c) is pushed onto the query
                   list. Now this query list (b, c, d, e) is processed again.

?- b, c, d, e.     b matches a fact (b) in the KB. Facts have no tail, so nothing is
                   pushed onto the query list.

?- c, d, e.        c matches a fact.

?- d, e.           d matches a fact.

?- e.              e matches a fact. The query list is now empty (⊥). The computation
                   stops. The initial query is successful: a, d, and e are logical
                   consequences of the KB.

The branch ending in • fails: g does not match the head of any formula in the KB. The
branch on the right also ends with an empty query list, since f, d, and e each match a
fact in the KB.

Each branch is a possible computation. The backward chaining process explores them one at a time. The first computation (the leftmost branch) is successful. Further computation to answer the initial query is not necessary, but for illustration it is represented in the tree.
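
To make the bookkeeping in the tree explicit, here is a minimal sketch of the backward chaining procedure itself, written as a small Prolog program that operates on a copy of the example KB. The predicates kb/2 and prove/1 are my own names; an actual Prolog system builds this procedure in, so the sketch is only meant to show how the query list is processed.

% The KB, stored as kb(Head, Tail) entries. Facts have the empty tail [].
kb(a, [b, c]).
kb(a, [f]).
kb(b, []).
kb(b, [g]).
kb(c, []).
kb(d, []).
kb(e, []).
kb(f, []).

% prove(QueryList) succeeds when every query in the list is a logical consequence of the KB.
prove([]).                             % the query list is empty: the computation succeeds
prove([Query | Rest]) :-
    kb(Query, Tail),                   % search the KB top-down for a clause whose head matches
    append(Tail, Rest, NewQueryList),  % push the tail onto the query list
    prove(NewQueryList).               % process the derived queries

% Example query: ?- prove([a, d, e]).   succeeds, as in the leftmost branch of the tree.
% If the current match leads to failure later (as in the branch ending in •), Prolog
% backtracks and tries the next matching clause, which corresponds to another branch.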

This tree raises several questions.

For now, just make sure you understand how to do the computations.

In the next lecture, we will consider in more detail how the backward chaining algorithm computes logical consequence, what logical consequence is, and how the form of intelligence backward chaining implements functions in the life of a rational agent.

The London Underground Example

The above examples are artificial, but there are real world examples. Consider the "London Underground" example that Robert Kowalski discusses (in Chapter 1 of Computational Logic and Human Thinking). The instructions in the emergency notice in the London Underground can be understood to include a goal-reduction procedure. The first sentence

Press the alarm signal button to alert the driver

can be understood as saying that

the goal of alerting the driver reduces to the subgoal of pressing the alarm signal button



We will talk more about maintenance goals later in the course--for now it is enough to know that they are an important part of the logic programming/agent model.
If the typical passenger has the "maintenance goal"

If there is an emergency, then I deal with the emergency appropriately

and the beliefs

I deal with the emergency appropriately if I get help
I get help if I alert the driver

then the instructions in the London Underground may be incorporated into the agent's mind in the form of a logic program (in which the beliefs constitute the KB).

This program functions as the "knowledge" the agent brings to bear on the situation. If the agent observes an emergency, his observation will trigger the antecedent

there is an emergency

of his or her maintenance goal

If there is an emergency, then I deal with the emergency appropriately

We will talk more about achievement goals later in the course--for now we can think of them as queries to the KB.

This in turn gives the agent a goal to achieve, an achievement goal

I deal with the emergency appropriately

To achieve this goal, backward chaining reduces the goal to an appropriate subgoal. So, given the beliefs in the KB, backward chaining results in a plan of action

I alert the driver.
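
Putting the pieces together, here is a sketch of how the passenger's beliefs might be written as a logic program in the Prolog notation used earlier. The predicate names are my own, and treating the pressing of the alarm signal button as a basic action (represented as a fact) is a simplifying assumption.

deal_with_emergency :- get_help.       % "I deal with the emergency appropriately if I get help"
get_help :- alert_driver.              % "I get help if I alert the driver"
alert_driver :- press_alarm_button.    % the goal-reduction in the emergency notice
press_alarm_button.                    % simplifying assumption: a basic action the passenger can perform

% The achievement goal, posed as a query:
% ?- deal_with_emergency.              backward chaining reduces this goal, step by step,
%                                      to pressing the alarm signal button.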

A Dual Process Model of Human Thinking

It seems obvious that human beings do not always explicitly reason in the way set out in the "London Underground" example, and Kowalski does not think otherwise. His view, as I understand it, is that human beings do sometimes reason explicitly in this way but that they also perhaps more frequently employ what he calls "intuitive thinking."


Kowalski shows what he calls "deliberative thinking" at the top of the circle that represents the "mind." He shows what he calls "intuitive thinking" at the bottom.
"[I]n recent years, cognitive psychologists have developed Dual Process theories, which can be understood as combining descriptive and normative theories. Viewed from the perspective of Dual Process theories, traditional descriptive theories focus on intuitive thinking, which is associative, automatic, parallel and subconscious. Traditional normative theories, on the other hand, focus on deliberative thinking, which is rule-based, effortful, serial and conscious. In this book, I will argue that Computational Logic is a dual process theory, in which intuitive and deliberative thinking are combined" (Kowalski, Computational Logic and Human Thinking, 15).

These "dual process theories" to which Kowalski refers are probably necessary for understanding rationality in human beings, but we will not much consider how to implement them on a machine. In this course, our aim is much more modest. It is to model some relatively simple forms of intelligence that might be part of the mind of some rational agent.

What we have Accomplished in this Lecture

We have constructed a very simple and clearly incomplete model of a rational agent. Rational agents have beliefs about the world, and they reason in terms of these beliefs to decide what to do. The KB is a symbolic structure that represents the agent's beliefs. The backward chaining algorithm on the KB implements the agent's reasoning about logical consequence.



