Thinking is Computation
Thinking as Computation 1, 2
Computational Logic and Human Thinking "Introduction," 1
AI (Artificial Intelligence) is the study of intelligence. AI is not the same as cognitive science. Cognitive science studies human cognition. AI studies intelligence more generally.
The focus in this course
In this course, the focus is on understanding the intelligence that characterizes a rational agent.
Human beings are rational agents. Further, they can think about their thinking. So it is not too misleading, at least initially, to think of this course as an exercise in using this ability to think about our thinking to design the mind of an agent whose intelligence is human-like but free from the mistakes human beings make.
• What is it for an agent to be rational?
It is not easy to say in an informative way just what rationality is, but we can make some progress by considering two characterizations that may seem plausible but are open to counterexample. These characterizations are made against the background assumption that an agent is rational just in case his or her actions are rational. Now we can ask what it is for an action to be rational.
An action is rational if and only if it accomplishes the intended goal.
This characterizes rationality as effectiveness.
A little reflection shows that this characterization is open to counterexample. Some actions are rational even though they do not accomplish any of the agent's goals, and some actions are not rational even though they do accomplish some of the agent's goals. Suppose, for example, that someone gets the flu shot but still comes down with the flu. The agent did not avoid the flu, but in thinking about what we would say about this example, it seems wrong to say that getting the shot was irrational. This shows that effectiveness is not a necessary condition for rationality.
Effectiveness is not sufficient for rationality either. Suppose, for example, that someone uses his life savings to buy lottery tickets. This is not rational even if he wins.
An action is rational if and only if it would accomplish the intended goal if the agent's beliefs were true.
This characterizes rationality as subjective.
It may seem plausible that rationality is subjective in this way, since actions based on the agent's beliefs would be intelligible, but it is possible to understand why an agent did something and for the action to be irrational nevertheless. This would be true if the agent's beliefs were irrational. It can be understandable why someone who acts on the basis of crazy beliefs about how the world works does what he does, but this does not make the actions rational. This seems to show that actions are rational only if the underlying beliefs upon which they are based are rational.
This suggests that for an agent to be rational, it is not enough for his actions to be rational. His beliefs must be rational too.
(It is worthwhile to think about the general form of reasoning employed in this discussion of what rationality is. We are evaluating claims (in this case about rationality) by looking to see if they are open to counterexample. The form a potential counterexample takes is relative to the claim under consideration. The claims about rationality take the form of "___ if, and only if ___" claims of necessity. To show that such claims are false, we need to show that the statements in the blanks can vary in truth value. To show that, we imagine a situation in which it is plausible to think that one of the statements is true and the other false. This situation is a counterexample. The more plausible the situation, the more plausible it is to think that the claim under consideration is false.)
• What is it for an agent to be intelligent?
• What is the connection, if any, between intelligence and rationality?
Intelligence, it seems, is a certain ability to form beliefs that help the agent achieve his goals. It is not easy, again, to say in an informative way just what this ability to form helpful beliefs is, but an example helps to make it a little clearer. Consider an agent who is concerned to avoid certain things in the environment, say coming close to a forest fire. The agent forms beliefs about its environment through perception. If the agent does not observe a fire in its immediate surroundings but reasons forward from its observation that there is smoke ahead to form the belief that the cause of the smoke is fire, then it seems natural to characterize the agent as more intelligent than an agent who lacks this ability to reason forward from observations.
The Observation-Thought-Decision-Action Cycle
Rational agents seem to operate in a kind of loop. They look around to see whether things are to their liking, they try to make them better if they are not enough to their liking, and they do this over and over again in a cycle. The thinking or cognition that underlies this cycle falls roughly into two parts with different functions. The aim of epistemic cognition is belief. The aim of practical cognition is action. Practical cognition evaluates the world as represented by the agent's beliefs, selects plans aimed at changing the world, and executes these plans.
(The word 'epistemic' is a rough transliteration of the Greek word ἐπιστήμη, which is often translated into English as 'knowledge.' It was part of the dominant tradition in ancient Greek philosophy to think that a knowledge and understanding of certain aspects of the world is necessary for human beings to orient themselves properly and thus for them to live good lives.)
The observation-thought part of the cycle
To understand the observation-thought-decision-action cycle, it is necessary (but not sufficient) to solve what in philosophy is traditionally called the problem of knowledge of the external world. To maintain itself, the agent must have procedures for accessing the world and for forming reliable beliefs about it. Some basic knowledge about the world may be built in, just as in human beings some basic knowledge about the world may be innate, but in a complex changing environment, neither a human being nor an artificially intelligent agent can be equipped from its inception with all the information it needs. Both must acquire new knowledge about the world by sensing their surroundings. This knowledge must come from perception, and the basic procedures must be built in.
Progress in AI has been slower than anticipated, and part of the reason may be that some of the problems that need solutions are not straightforward engineering problems. The problem of knowledge of the external world is an example. No set of procedures in an artificially intelligent agent counts as a solution to the problem of knowledge unless these procedures for drawing conclusions about the external world are rational, and the question of whether a given procedure for drawing a conclusion is rational is not a question in engineering. It is, or at least is in part, a question in philosophy.
The correct procedures for forming beliefs are difficult to describe in the required detail. Think about perceptual beliefs. Perception is a process that begins with the stimulation of sensors and ends with beliefs about immediate surroundings. However this works in particular cases, it is clear that perception does not always result in true beliefs. A percept P at time t (where P ranges over the percepts possible given the perceptual apparatus of the agent) is (what is sometimes called) a "defeasible reason" to believe that P at t. That is to say, it is possible for the agent subsequently to acquire new information that makes it rational for the agent to withdraw the belief it formed on the basis of the percept. To set out these procedures for belief formation and maintenance is nontrivial.
The decision-action part of the cycle
It is not enough simply to form beliefs. To maintain itself, the agent must act in various ways. Further, at least some of what the agent does must be based on its beliefs about how the world is.
In deciding what to do, the agent first evaluates the world as represented by its beliefs. If evaluation determines that the world is not enough to the liking of the agent, practical cognition evaluates plans to make the world more to the agent's liking. Epistemic cognition comes up with the plans, practical cognition selects a plan and executes it, and the cycle repeats.
(Whether the agent likes its situation seems to depend in part on what the agent believes is true of its situation. So one way for an agent to make the world more to its liking is to change its beliefs about its situation. The agent, in this way, might be happier about his situation if he had false beliefs about it or didn't have beliefs about certain aspects of it at all. One might wonder whether this is rational. The answer, it seems, depends on the case. It doesn't seem rational, say, to undergo hypnosis to become happier by acquiring wildly false beliefs about oneself and the world. In other cases, though, it seems that knowing certain things can make one miserable. In some of these cases, it may be that eliminating these beliefs is rational. Beliefs resulting from trauma might be an example.)
The cycle in the life of a rational agent
This is the life of a rational agent: evaluate, plan, act, and reevaluate. It is easy enough to understand in general, but it is difficult to describe the underlying cognitive mechanism in detail.
The Fundamental Epistemic State
What epistemic state is fundamental in the observation-thought part of the cycle? Is it knowledge, belief, or something else?
(The prior sections assume without argument that it is belief.)
• What is knowledge?
• What is belief?
These are philosophical questions. Answers to them are controversial, but it is possible to make some progress.
Consider how 'know' is used in English. Sometimes it occurs as a propositional attitude verb, such as in the sentence Tom knows that Socrates died in 399 BC. The word 'believe' is also used as a propositional attitude verb. It is used this way in the sentence Tom believes that Socrates died in 399 BC. So to begin to understand what knowledge is, we can think about the difference in the attitudes that these two verbs express in these sentences. In both sentences, the attitude is toward the same proposition: that Socrates died in 399 BC. This is the proposition Tom is said to know and to believe. The difference in the sentences is in the attitude each ascribes. One sentence says that Tom knows the proposition. The other says that he believes it.
It follows that these two sentences say something different because it is possible for them to differ in truth value. What someone knows he also believes, but it is not the case that what someone believes he also knows. In this way, it is possible for Tom to believe but not know that Socrates died in 399 BC.
Here is another way to express this point. Consider the following two arguments (written in premise/conclusion form):
Tom knows that Socrates died in 399 BC
=====
Tom believes that Socrates died in 399 BC

Tom knows that Socrates died in 399 BC
=====
Socrates died in 399 BC
These arguments are valid. In each case, it is impossible for the conclusion to be false if the premise is true.
The same, however, is not true for these arguments:
Tom believes that Socrates died in 399 BC
======
Tom knows that Socrates died in 399 BC

Tom believes that Socrates died in 399 BC
======
Socrates died in 399 BC
These arguments are not valid. In each case, it is possible for the premise to be true and for the conclusion to be false.
With respect to the first invalid argument, even if the proposition (that Socrates died in 399 BC) is true, it is possible for the premise to be true and the conclusion to be false because it is possible that the subject (Tom) formed the belief in an unacceptable way. Not all beliefs are knowledge. Knowledge, it seems, cannot be just a lucky guess. Knowledge requires that the subject be in a special position with respect to the proposition and that the proposition be true. (For some discussion of knowledge and its analysis, see The Analysis of Knowledge in the Stanford Encyclopedia of Philosophy.)
With respect to the second invalid argument, the reason for its invalidity is more obvious. From the fact that someone believes something, it does not follow that what he believes is true.
At this point, then, we can ask and begin to decide whether 'know' is the right word in the assertion in Thinking as Computation that "[t]hinking is bringing to bear what you know on what you are doing" (3). The answer is not completely clear, but a reasonable case can be made for thinking that what matters to intelligence is what the agent believes, not just what he knows.
Here is an example that seems to show that belief (and not just knowledge) is what matters.
Suppose that someone is considering whether to bring an umbrella when he goes out for the day. To decide, he looks out the window. It looks cloudy to him, so he decides to bring the umbrella. In fact, it is not cloudy. The window is cleverly painted so that it looks like it is cloudy. Since knowledge entails truth, the agent does not know that it is cloudy. He only thinks he knows.
With respect to this example, we can ask what epistemic state he should bring to bear in deciding whether to take an umbrella. The answer, it seems, is belief.
The "Knowledge Base" (KB)
This example shows (or at least suggests) that the "knowledge base" (KB) in an intelligent agent consists in propositions the agent believes. Rational agents represent their circumstances in terms of their beliefs. Some of these beliefs may be knowledge, but belief is the fundamental epistemic state. This, at any rate, will be our assumption in this course.
The Computation on the Knowledge Base
In addition to having beliefs, rational agents use their beliefs in reasoning. The assumption in this course is that reasoning is a computational process. The basic computational process in terms of which we will represent reasoning is backward chaining.
This raises a host of questions. Here are answers to some of them.
What is computation?
Computations operate on symbolic structures.
What is a symbolic structure?
The assumption in this course is that what an agent believes is represented symbolically in a "knowledge base" (KB). The representations are stated in the language of the first-order predicate calculus. (We will consider the first-order predicate calculus in more detail later in this course.) Computation operates on these symbolic representations in the agent's "knowledge base" (KB).
Backward chaining computes logical consequence.
(Note that even though we are assuming that belief (not knowledge) is the fundamental epistemic state, we will continue to use the term "knowledge base" (KB).)
The central idea that motivates backward chaining goes all the way back to Aristotle (384-322 BCE). In his Nicomachean Ethics III.3.1112b, he observed that deliberation about how to satisfy a goal is a matter of working backwards from the goal to something the agent can do. His idea, roughly, is that as part of having reason and being rational, human beings form a conception of what the good is. Given this conception, they form a belief about what the good is in the particular circumstances in which they find themselves. This belief, in turn, triggers deliberation about how to acquire this good. Deliberation, in this way, is what is sometimes called a "goal-reduction procedure." This procedure of reducing goals to something the agent can do motivates the idea that underlies backward chaining.
(Backward chaining features prominently in logic programming. Logic programming is a subject for a subsequent lecture. For now, it is enough to note that Stuart J. Russell and Peter Norvig in their Artificial Intelligence: A Modern Approach say that "[logic programming] is the most widely used form of automated reasoning" (287). Artificial Intelligence: A Modern Approach is perhaps the most widely used AI textbook in computer science. When they wrote the book, Russell was a professor of computer science at the University of California, Berkeley. Norvig was the head of research at Google.)
A simple example helps illustrate the use of backward chaining in a goal-reduction procedure.
For the example, suppose that there are basic actions. These actions are things someone can do without doing something else. Lifting my arm might be an example. I can do that, and it is natural to think that I do it without doing anything else. By contrast, opening a door is not like that. It is not a basic action. I can open a door, but I do it by doing several other things. I grab the knob, twist, and pull the door open. So, in order to open the door, I have to do several other things. That is to say, for it to be true that I open the door, it has to be true that I grab the knob, twist, and pull open the door.
This distinction may be reflected formally. A formula of the form
a ← b, c.
may be understood to say that for a to be true, it is sufficient for b and c to be true. By contrast,
a formula of the form

b.

may be understood to say that b is true. Nothing needs to be true for b to be true. Now, given this explanation of the formulas, suppose that an agent has beliefs about actions and their conditions. Suppose that these beliefs are in the knowledge base (KB) or what in the context of logic programming is called a program:
a ← b, c.
a ← f.
b.
b ← g.
c.
If the agent (who has this knowledge base) asks himself whether he can make a true, he can determine the answer by reasoning as follows:
There are two ways for a to be true.
In one way for a to be true, it is sufficient that both b and c are true. Are b and c both true? To answer, I shall consider them one at a time. Yes, b is true. Yes, c is true too. So the truth of a is a consequence of what I know about the world. I have reasoned backwards from the goal a to things I can do (b and c).
(Here is a more formal description of the "backward chaining" that occurs in the example.
a is called the query. It is posed to the KB. To answer (the question of whether the query is or is not a logical consequence of the KB), backward chaining determines whether the query matches a head of one of the formulas in the KB. In fact, this query matches the head of the first formula in the KB. (a is the head in a ← b, c. The tail is b, c.) Given this match, backward chaining now issues in two derived queries, b and c. (The tail provides the derived queries.) These queries are processed last in, first out. b matches the head of the third formula in the KB. Backward chaining issues in no derived query. The remaining query is c. It matches the head of the fifth formula. Again, backward chaining issues in no derived query. Now that there are no more queries, backward chaining returns a positive answer to the query: a is a logical consequence of the KB.)
Now consider the other way for a to be true.
In this way for a to be true, it is sufficient for f to be true. Is f true? No, according to the KB, there are no conditions sufficient for the truth of f.
(Here is the more formal description of the backward chaining. a is the query. It matches the head of the second formula. Backward chaining issues in the derived query f. f matches no head of any formula in the KB. The procedure returns a negative answer: f is not a logical consequence of the KB, so this way of establishing a fails.)
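The reasoning in this example can be sketched in Python. This is an illustrative sketch, not part of the course materials; it assumes the five-formula KB just described (with b. and c. as facts, so that b and c match heads and issue no derived queries).

```python
# A sketch of propositional backward chaining (illustrative, not the book's code).
# Each formula is a (head, tail) pair; a fact is a formula whose tail is empty.
KB = [
    ("a", ["b", "c"]),  # a <- b, c.
    ("a", ["f"]),       # a <- f.
    ("b", []),          # b.
    ("b", ["g"]),       # b <- g.
    ("c", []),          # c.
]

def solve(query):
    """Return True if the query is a logical consequence of the KB."""
    for head, tail in KB:                    # search the KB top-down
        if head == query:                    # the query matches a head
            if all(solve(q) for q in tail):  # process the derived queries
                return True                  # this way of establishing the query works
    return False                             # no formula establishes the query

print(solve("a"))  # True: a follows via a <- b, c.
print(solve("f"))  # False: nothing in the KB establishes f
```

The recursion mirrors the informal reasoning: to answer a query, find a formula whose head matches it and answer the derived queries from its tail; a fact contributes no derived queries.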
Here is a (slightly more complicated) goal-reduction example in Prolog notation. (Prolog is a computer programming language, which we will occasionally use in this course.)
Suppose in this example that the query is
?- a, d, e.
relative to the knowledge base or program
a :- b, c.    (this is Prolog notation for "a ← b, c.")
a :- f.
b.
b :- g.
c.
d.
e.
f.
(Notice that the query and KB are slightly different from the query and KB in the prior example.)
There are three possible computations given the query and the KB. These computations may be understood in terms of the following (upside down) tree whose nodes are the query lists. The commentary beside the tree is for the computation represented in the leftmost branch.
                  ?- a, d, e.                 This is the initial query list. We process
                 /           \                the list last in, first out. The first query
                /             \               on the list is a. We search top-down for a
       ?- b, c, d, e.      ?- f, d, e.        match with the head of one of the clauses in
          /      \              |             the KB. a matches the head of the first rule
         /        \             |             a :- b, c. The tail is pushed onto the
 ?- c, d, e.  ?- g, c, d, e.  ?- d, e.        query list. b matches a fact. Facts have no
      |             |           |             tail, so nothing is pushed onto the query list.
 ?- d, e.           •         ?- e.           c matches a fact.
      |                         |
 ?- e.                          ⊥             d matches a fact.
      |
      ⊥                                       e matches a fact. The query list is now
                                              empty. a, d, and e are logical consequences
                                              of the KB.
The backward chaining process explores one computation at a time until it terminates. The commentary for the above example is for the computation in the leftmost branch. This computation is successful. Further computations to answer the initial query are not necessary, but they are represented in the tree.
Suppose that the fact e was not in the KB. Then the computation fails, and the backward chaining process returns to the most recently encountered choice-point to try a new match for b. (A node in the tree with more than one immediate descendant is a choice-point.) There is another match, so the backward chaining process explores this new computation.
The evaluation of the initial query ends when there remain no choice-points from which to begin a new computation. A computation is said to be successful just in case it ends with the empty query list (denoted above as ⊥). A successful computation confirms that the initial query list is a logical consequence of the program or knowledge base.
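The query-list computation in the tree can be sketched in Python. This is an illustrative sketch under the assumption that the program contains the facts b., c., d., e., and f. in addition to the rules a :- b, c., a :- f., and b :- g. (the facts the tree's branches rely on).

```python
# A sketch of backward chaining over a query list (illustrative).
# The query list is processed last in, first out; each matching clause
# is a choice point, and a failed computation backtracks to the most
# recently encountered choice point.
KB = [
    ("a", ["b", "c"]),  # a :- b, c.
    ("a", ["f"]),       # a :- f.
    ("b", []),          # b.
    ("b", ["g"]),       # b :- g.
    ("c", []),          # c.
    ("d", []),          # d.
    ("e", []),          # e.
    ("f", []),          # f.
]

def solve(queries):
    """Return True if the query list reduces to the empty list."""
    if not queries:                        # empty query list: success (the ⊥ leaves)
        return True
    first, rest = queries[0], queries[1:]  # pop the first query
    for head, tail in KB:                  # each matching clause is a choice point
        if head == first and solve(tail + rest):
            return True                    # one successful computation is enough
    return False                           # dead end (the • leaf): backtrack

print(solve(["a", "d", "e"]))  # True: the leftmost computation succeeds
```

Recursion does the bookkeeping here: returning False from a branch and continuing the loop plays the role of returning to the most recent choice point to try a new match.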
The London Underground Example
Consider a real world example, the London Underground example that Robert Kowalski discusses in Computational Logic and Human Thinking (CLHT). The instructions in the emergency notice in the London Underground can be understood to include a goal-reduction procedure. The first sentence
Press the alarm signal button to alert the driver
can be understood as saying that
the goal of alerting the driver reduces to the subgoal of pressing the alarm signal button
If the typical passenger has the "maintenance goal" (we will talk more about maintenance goals later in the course; for now, know that they are an important part of the logic programming/agent model)
If there is an emergency, then you deal with the emergency appropriately
and the beliefs
You deal with the emergency appropriately if you get help

You get help if you alert the driver
then the instructions in the London Underground may be incorporated into the agent's mind in the form of a logic program.
This program functions as the "knowledge" the agent brings to bear on the situation. If the agent observes an emergency, his observation will trigger the antecedent
there is an emergency
of his maintenance goal
If there is an emergency, then you deal with the emergency appropriately
This in turn gives him a goal to achieve, an achievement goal
I deal with the emergency appropriately
To achieve this goal, he uses backward chaining to reduce the goal to an appropriate subgoal. Given his beliefs, backward chaining results in a plan of action
I alert the driver.
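The London Underground program and its goal reduction can be sketched in Python. The predicate names here are assumptions for illustration, not Kowalski's exact notation, and the sketch carries the reduction one step further, to the basic action of pressing the alarm signal button.

```python
# A sketch of goal reduction for the London Underground example (illustrative;
# the names are hypothetical). Each belief reduces a goal to subgoals.
KB = [
    ("deal_with_emergency", ["get_help"]),     # deal with the emergency if you get help
    ("get_help", ["alert_driver"]),            # get help if you alert the driver
    ("alert_driver", ["press_alarm_button"]),  # alerting the driver reduces to
                                               # pressing the alarm signal button
]

def reduce_goal(goal):
    """Backward chain from a goal to the basic actions that achieve it."""
    for head, tail in KB:
        if head == goal:
            # Reduce the goal to its subgoals, then reduce each subgoal in turn.
            return [act for subgoal in tail for act in reduce_goal(subgoal)]
    return [goal]  # no belief reduces the goal further: treat it as a basic action

print(reduce_goal("deal_with_emergency"))  # ['press_alarm_button']
```

Achieving the goal of dealing with the emergency thus reduces, through the agent's beliefs, to something the passenger can simply do.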
What we have accomplished
At this point, we have a very simple (and clearly incomplete) model of the intelligence of a rational agent. Rational agents have beliefs about the world, and they reason in terms of these beliefs to decide what to do. The KB is a symbolic structure that represents the agent's beliefs. The backward chaining procedure on the KB represents the agent's reasoning.