Thinking is Computation

Thinking as Computation 1, 2
Computational Logic and Human Thinking "Introduction," 1


AI (Artificial Intelligence) is the study of intelligence. AI is not the same as cognitive science. Cognitive science studies human cognition. AI studies intelligence more generally.


Artificial Intelligence

In this course, the focus is on understanding the intelligence that characterizes a rational agent. Human beings are the primary examples of rational agents. So it is not too misleading, at least initially, to think of AI as an attempt to design the mind of an agent whose intelligence is human but who does not make the sorts of mistakes in reasoning that human beings typically make.


Epistemic Cognition (Thinking about What to Believe)

The first problem is to understand what in philosophy is traditionally called the problem of knowledge of the external world. To design artificially intelligent agents, it is necessary (but not sufficient) to solve this problem. It is necessary to set out procedures for accessing the world and for forming reliable beliefs about it. Some basic knowledge may be built in, just as some basic knowledge in human beings may be innate, but in a complex changing environment, neither a human being nor an artificially intelligent agent can be equipped from its inception with all the information it needs. Both must be capable of gathering new information by sensing their surroundings. This knowledge of the world must be inferred from perception, and the basic inference rules must be built in.

It is possible that progress in AI has been slower than anticipated because some of the problems are of a philosophical nature rather than being straightforward engineering problems. An artificially intelligent agent must draw conclusions about the external world that we regard as rational. Drawing conclusions is a process, so this generates an essentially procedural concept of rationality. Rationality should apply to the course of cognition, not to the results of cognition. To know how it applies, we need a theory of rationality. This is a theory in philosophy.

It is not easy to provide a theory of rationality. Think about forming beliefs on the basis of perception. Perception is a process that begins with the stimulation of sensors and ends with beliefs about immediate surroundings. However this works in particular cases, it is clear that perception is not always veridical. As a consequence, it appears that the reasoning from percepts to beliefs must be defeasible. It must be that having a percept P at time t (where P ranges over the percepts possible given the perceptual apparatus of the agent) is a reason to believe that P at t, a reason that can be defeated by new information. So at the outset of the project to design an artificially intelligent agent there are three problems: understanding percepts, beliefs, and defeasible reasoning.

(These three problems are difficult enough that we will only touch on them in this course.)


Practical Cognition (Thinking about What to Do)

In addition to thinking about what to believe, there is thinking about what to do.

Practical cognition evaluates the world (as represented by the agent’s beliefs). If practical cognition determines that the world is not good enough from the point of view of the agent, it asks epistemic cognition to supply a plan of action to make the world better. Practical cognition executes the plan, and then practical cognition repeats the cycle.

How to tie epistemic cognition to practical cognition in this way is the second main part of the problem of understanding intelligence. Intelligence is what underwrites the success of a cognitive agent. Evaluate, change, and evaluate again. This is the life of a cognitive agent, and intelligence is the mechanism that underwrites the cycle.

The Fundamental Epistemic State

What epistemic state is fundamental for intelligence? Is it knowledge, belief, or something else altogether?

What is knowledge? What is belief? These are philosophical questions. Answers to them are controversial, but it is possible to make some progress.

Consider how 'know' is used in English. Sometimes it occurs as a propositional attitude verb, such as in the sentence Tom knows that Socrates died in 399 BC. The word 'believe' is also used as a propositional attitude verb. It is used this way in the sentence Tom believes that Socrates died in 399 BC. So to begin to understand what knowledge is, we can think about the difference in the attitudes that these two verbs express in these sentences. The attitude is toward the same proposition (that Socrates died in 399 BC is the proposition Tom is said to know and is said to believe), but the attitude itself is different in the two cases. What someone knows he also believes, but it does not follow that what someone believes he also knows.

Here is another way to express this point. Consider the following two arguments

      
      Tom knows that Socrates died in 399 BC
      =====
      Tom believes that Socrates died in 399 BC
      
      
      Tom knows that Socrates died in 399 BC
      =====
      Socrates died in 399 BC     
	

These arguments are valid. In each case, it is impossible for the conclusion to be false if the premise is true. The same, however, is not true for these arguments

	
      Tom believes that Socrates died in 399 BC
      ======
      Tom knows that Socrates died in 399 BC
      
      
      Tom believes that Socrates died in 399 BC
      ======
      Socrates died in 399 BC         
	

These arguments are not valid. In each case, it is possible for the premise to be true and for the conclusion to be false. This is possible because it is possible that the subject (Tom) formed the belief in an unacceptable way. Not all beliefs are knowledge. Knowledge requires that the subject be in a special position with respect to the proposition and that the proposition be true.

At this point, we can ask whether 'know' is the right word in "Thinking is bringing to bear what you know on what you are doing" (TC, 3). The answer is not at all obvious, but a reasonable case can be made for thinking that what matters to intelligence is what the agent believes, not just what he knows. The necessary premise is that he acts on the basis of his beliefs.

Suppose, for example, that someone is considering whether to bring an umbrella when he goes out for the day. To decide, he looks out the window. It looks cloudy to him, so he decides to bring the umbrella. In fact, it is not cloudy. The window is cleverly painted so that it looks like it is cloudy. In this case, since knowledge entails truth, the agent does not know that it is cloudy outside. He only thinks he knows. What epistemic state does he bring to bear in making the decision to take the umbrella? The answer, it seems, is belief.

This suggests that the "knowledge base" (KB) in an intelligent agent really consists in propositions the agent believes, not necessarily propositions he knows. (For some discussion of knowledge and its analysis, see The Analysis of Knowledge in the Stanford Encyclopedia of Philosophy.) This, at any rate, will be our tentative assumption in this course.


The Computation on the Knowledge Base

In addition to having beliefs, rational agents use their beliefs in reasoning. The assumption in this course is that reasoning (and thinking generally) is a form of computation.

What is computation?

Computations operate on symbolic structures.

What is a symbolic structure?

The assumption in this course is that what an agent believes is represented symbolically in a "knowledge base" (KB). The representation of knowledge and belief is stated in the language of first-order predicate calculus. (We will consider the first-order predicate calculus in more detail later in this course.) Computation operates on entries in this "knowledge base" (KB).

(Even if belief, not knowledge, is the right epistemic state, we will continue to talk about a "knowledge base" (KB).)
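
To make this a little more concrete, here are a few sample entries of the kind a KB might contain, written in the Prolog notation introduced later in these notes. The particular predicates (human, died_in, mortal) are illustrations of mine, not entries from the readings.

human(socrates).              % a fact: Socrates is human
died_in(socrates, 399).       % a fact: Socrates died in 399 (BC), recording only the year
mortal(X) :- human(X).        % a rule: X is mortal if X is human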


Backward Chaining

Chapter 2 of TC introduces the logical form of entries in the knowledge base (KB) and the fundamental procedure for operating on the KB.

The logical form of items in the KB is expressed in the language of the first-order predicate calculus. The procedure for operating on the KB is backward chaining. Backward chaining computes logical consequence. These statements about logical form, the first-order predicate calculus, and computing logical consequence can appear forbidding, but the ideas are really pretty straightforward.

The central idea that motivates backward chaining goes all the way back to Aristotle (384-322 BCE). In his Nicomachean Ethics III.3.1112b, he observed that deliberation about how to satisfy a goal is a matter of working backwards from the goal to something the agent can do. His idea, roughly, is that as part of having reason and being rational, human beings form a conception of what the good is. Given this conception, they form a belief about what the good is in the particular circumstances in which they find themselves. This belief, in turn, triggers deliberation about how to bring this good about. Deliberation, in this way, is a goal-reduction procedure. This goal-reduction procedure motivates the idea that underlies backward chaining.

(Backward chaining features prominently in logic programming. Logic programming is a subject for a subsequent lecture. For now, it is enough to note that Stuart J. Russell and Peter Norvig in their Artificial Intelligence, A Modern Approach say that "[logic programming] is the most widely used form of automated reasoning" (287). Artificial Intelligence, A Modern Approach is perhaps the most widely used AI textbook in computer science. Russell is a professor of computer science at the University of California, Berkeley. Norvig is the head of research at Google.)


• A simple example helps illustrate the use of backward chaining in a goal-reduction procedure.

Suppose that there are basic actions, things that someone can do without doing something else. Lifting my arm might be an example. I can do that, and it is natural to think that I do not do it by doing anything else. By contrast, opening a door is not like that. It is not a basic action. I can open a door, but I do it by doing several other things. I grab the door knob, twist, and pull the door open. So, in order to open the door, I have to do several other things. That is to say, for it to be true that I open the door, it has to be true that I grab the knob, twist, and pull open the door.

This distinction may be reflected formally. A formula of the form

a ← b, c.

may be understood to say that for a to be true, it is sufficient for b and c to be true. By contrast,

b.

may be understood to say that b is true. Nothing needs to be true for b to be true. Now, given this explanation of the formulas, suppose that an intelligent agent has beliefs about actions and their conditions. Suppose that this is represented in the following knowledge base (KB) or what in the context of logic programming is called a program:

a ← b, c.
a ← f.
b.
b ← g.
c.
d.
e.

If the agent (who has this knowledge base) asks himself whether he can make a true, he can determine the answer by reasoning as follows:

There are two ways for a to be true.

In one way for a to be true, it is sufficient that both b and c are true. Are b and c both true? To answer, consider them one at a time. Yes, b is true. Yes, c is true too. So the truth of a is a consequence of what I know about the world. I have reasoned backwards from the goal a to things I can do (b and c).

(Here is a more formal description of the backward chaining in the example. a is the query posed to the KB. This query matches the head of the first formula in the KB. (a is the head in a ← b, c. The tail is b, c.) Backward chaining issues in two derived queries, b and c. These queries are processed last in, first out. b matches the head of the third formula in the KB. Backward chaining issues in no derived query. The remaining query is c. It matches the head of the fifth formula. Again, backward chaining issues in no derived query. And now there are no more queries. So the procedure returns a positive response: a is a logical consequence of the KB.)

Now consider the other way for a to be true.

In this way for a to be true, it is sufficient for f to be true. Is f true? No, according to the KB, there are no conditions sufficient for the truth of f.

(Here is the more formal description of the backward chaining. a is the query. It matches the head of the second formula. Backward chaining issues in the derived query f. f matches no head of any formula in the KB. The procedure returns a negative response: f is not a logical consequence of the KB.)
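
The procedure just described can be written down as a short program. The following is a sketch in Prolog, not code from the readings; the predicates rule and prove are names introduced here for illustration. Each formula of the form a ← b, c. is encoded as rule(a, [b, c]), and a fact such as b. is encoded as a rule with an empty body.

rule(a, [b, c]).
rule(a, [f]).
rule(b, []).
rule(b, [g]).
rule(c, []).
rule(d, []).
rule(e, []).

% prove(Queries) succeeds when every query in the list is a logical
% consequence of the KB.  Derived queries are processed last in, first out.
prove([]).
prove([Query|Rest]) :-
    rule(Query, Body),            % the query matches the head of a formula
    append(Body, Rest, Queries),  % push the tail onto the front of the query list
    prove(Queries).

Posed to this program, the query ?- prove([a]). succeeds, mirroring the first line of reasoning above, and ?- prove([f]). fails, just as the derived query f fails in the second.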


Here is a (slightly more complicated) goal-reduction example in Prolog notation. (Prolog is a computer programming language, which we will occasionally use in this course.)

Suppose in this example that the query is

?- a, d, e.

relative to the knowledge base or program

a:-b, c. (this is Prolog notation for "a ← b, c.")
a:-f.
b.
b:-g.
c.
d.
e.
f.

(Notice that the query and KB are slightly different from the query and KB in the prior example.)

There are three possible computations given the query and the KB. These computations may be understood in terms of the following (upside down) tree whose nodes are the query lists. The commentary beside the tree is for the computation represented in the leftmost branch.

                                                                            

              ?- a, d, e.                           initial query
               /       \
    ?- b, c, d, e.    ?- f, d, e.                   a matches head of first rule (a:-b, c).  The tail is
      /         \             |                     pushed on to the query list.
?- c, d, e.  ?- g, c, d, e.  ?- d, e.               b matches a fact.  Facts have no tail, so nothing is
    |              |            |                   pushed onto the query list.
?- d, e.           •          ?- e.                 c matches a fact.
   |                            |
 ?- e.                          ⊥                   d matches a fact.
   |
   ⊥                                                e matches a fact.  The query list is now empty.

The backward chaining process explores one computation at a time until it terminates. It then returns to the most recently encountered choice-point. (A node in the tree with more than one immediate descendant is called a choice-point.) From there it proceeds with a new computation. The evaluation of the initial query ends when there remain no choice-points from which to begin a new computation. A computation is said to be successful just in case it derives the empty query (denoted above as ⊥). A successful computation confirms that the conjunction in the initial query is a logical consequence of the program or knowledge base. So, in the example, backward chaining confirms that a, d, and e are consequences of the knowledge base.
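
To see this on an actual Prolog system, the following session is a sketch of what SWI-Prolog reports. The file name kb.pl is an assumption of these notes, and the dynamic directive is an addition: without it, SWI-Prolog raises an error for the undefined predicate g rather than quietly failing at the dead end marked • in the tree.

% contents of kb.pl: the eight clauses listed above, plus this directive
:- dynamic g/0.      % so that the call to g simply fails

?- [kb].             % load the program
true.

?- a, d, e.          % the initial query
true ;               % first successful computation (the leftmost branch)
true.                % second successful computation (the rightmost branch)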



The London Underground Example

Consider a real world example, the London Underground example that Robert Kowalski discusses in Computational Logic and Human Thinking (CLHT). The instructions in the emergency notice in the London Underground can be understood to include a goal-reduction procedure. The first sentence

Press the alarm signal button to alert the driver

can be understood as saying that

the goal of alerting the driver reduces to the subgoal of pressing the alarm signal button

If the typical passenger has the "maintenance goal" (we will talk more about maintenance goals later in the course--for now know that they are an important part of the logic programming/agent model)

If there is an emergency, then you deal with the emergency appropriately

and the beliefs

You deal with the emergency appropriately if you get help
You get help if you alert the driver

then the instructions in the London Underground may be incorporated into the agent's mind in the form of a logic program.
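
A minimal Prolog rendering of such a program might look as follows. The predicate names are mine, not Kowalski's, and recording the button press as a fact is a simplification that treats it as a basic action the passenger can simply perform.

deal_with_the_emergency_appropriately :- get_help.
get_help :- alert_the_driver.
alert_the_driver :- press_the_alarm_signal_button.
press_the_alarm_signal_button.

The query ?- deal_with_the_emergency_appropriately. then succeeds, and the chain of subgoals it passes through (get help, alert the driver, press the alarm signal button) is the goal-reduction the emergency notice is meant to produce.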


This program functions as the "knowledge" the agent brings to bear on the situation. If the agent observes an emergency, his observation will trigger the antecedent

there is an emergency

of his maintenance goal

If there is an emergency, then you deal with the emergency appropriately

This in turn gives him a goal to achieve, an achievement goal

I deal with the emergency appropriately

To achieve this goal, he uses backward chaining to reduce the goal to an appropriate subgoal. Given his beliefs, backward chaining results in a plan of action

I alert the driver.











