Philosophy, Computing, and Artificial Intelligence

PHI 319. Thinking is Computation.

Robert Kowalski. Computational Logic and Human Thinking.
Introduction (14-21), Chapter 1 (22-37).

Hector Levesque. Thinking as Computation.
Chapter 1 (1-5, 11-21), Chapter 2 (23-38).


Artificial Intelligence

AI (Artificial Intelligence) is the study of intelligence. AI is not the same as cognitive science. Cognitive science studies human cognition. AI studies intelligence more generally.

In this course, the focus is on understanding the intelligence that characterizes a rational agent.

Human beings are rational agents, and their intelligence is a primary example of the intelligence of a rational agent that we seek to understand and to model in this course. Furthermore, human beings can think about their own thinking. So it is not too misleading, at least initially, to think of the content of this course as an exercise in the use of this ability to think about our thinking to design the "mind" of a rational agent whose intelligence is human-like but not necessarily human.

The Rationality of an Agent

It is not easy to say in an informative way just what rationality is, but we can make some progress by considering two characterizations that may seem plausible but are open to counterexample. The background assumption for these characterizations is that an agent is rational just in case his or her actions are rational. Now we can ask what it is for an action to be rational.

One possibility is that an action is rational if and only if it accomplishes the intended goal. This account characterizes rational action in terms of the effectiveness of the outcome.

This is the method of counterexample. The claims about what rational action is take the form of "___ if, and only if ___" claims of necessity. For such claims to be true, the statements in the blanks on the two sides of the biconditional cannot vary in truth-value. One way to show that the biconditional is false is to imagine a situation in which one of the statements is true and the other is false. The more plausible it is to think the imagined situation is possible, the more plausible it is to think that the situation imagined is a counterexample to the claim under consideration.

A little reflection shows that this account is open to counterexample. Some actions are rational even though they do not accomplish any of the agent's goals, and some actions are not rational even though they do accomplish some of the agent's goals. Suppose, for example, that someone gets a flu shot but still comes down with the flu. The agent did not avoid the flu, but in thinking about what we would say about this example, it seems wrong to think that getting the shot was not rational. This shows that effectiveness is not a necessary condition for rational action.

Effectiveness is not sufficient for rationality either. Suppose, for example, that someone uses his life savings to buy lottery tickets. This is not rational even if he wins.

Another possibility is that an action is rational if and only if it would accomplish the intended goal if the agent's beliefs were true. This account characterizes rational action subjectively, in terms of how the world is according to the agent's beliefs.

Again, a little reflection shows that this account is open to counterexample. Actions the agent takes based on his or her beliefs are intelligible, but it does not follow that such actions are rational. We can understand why someone who acts on the basis of irrational beliefs about how the world works does what he or she does, but the beliefs are irrational nevertheless.

What conclusion can we draw from the failure of these two accounts?

It appears that actions are rational only if the underlying beliefs upon which they are based are rational. So one question has led to another question. Now we need to know what it is for an agent's beliefs to be rational. This question, however, is not easy to answer in a general and informative way. The model we consider is only a rough approximation.

The Intelligence of a Rational Agent

Intelligence, it seems, is a certain ability to form beliefs that help the agent achieve his or her goals. It is not easy to say in an informative way what this ability to form helpful beliefs is, but an example may make the underlying idea a little clearer. Consider an agent who is concerned to avoid certain things in the environment, say coming close to a forest fire. This agent, we may suppose, forms beliefs about its environment through perception. Now suppose the agent observes the presence of smoke in its immediate surroundings. The agent does not observe a fire, so the observation of smoke alone does not provoke a response. Suppose, however, the agent has the ability to reason from its observation that there is smoke to the belief that the cause of the smoke is fire. Now it has beliefs that can help it achieve its goals, and it seems natural to characterize this agent as more intelligent than an agent who lacks the ability to reason from observations.
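
To anticipate the Prolog-style notation used later in this lecture, the smoke-fire inference can be written as a one-rule toy program. The encoding is our own and deliberately crude: it flattens reasoning from an observed effect back to its cause into a simple "if smoke then fire" rule.

% A toy encoding of the smoke-fire inference (our own, deliberately crude).
there_is_fire :- there_is_smoke.   % the agent's general belief, flattened into a rule
there_is_smoke.                    % the agent's observation

% The query ?- there_is_fire. succeeds, so the agent comes to believe
% there is a fire nearby and can act on its goal of avoiding it.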

The Observation-Thought-Decision-Action Cycle

The word 'epistemic' is a near transliteration of the Greek noun ἐπιστήμη, often translated as 'knowledge.' The dominant philosophical tradition thought that knowledge of certain aspects of the world is necessary for human beings to orient themselves properly and thus for them to live the kind of lives the philosophers understood as good lives.

Rational agents seem to operate in a way that forms a cycle. They look around to see whether things are to their liking, they try to make them better if they are not enough to their liking, and they do this over and over again in a cycle. The thinking or cognition that underlies this cycle falls roughly into two parts with different functions. The aim of epistemic cognition is belief. The aim of practical cognition is action. Practical cognition evaluates the world as represented by the agent's beliefs, selects plans aimed at changing the world, and executes these plans.

The observation-thought part of the cycle

To understand the observation-thought-decision-action cycle, it is necessary to solve what in philosophy is traditionally called the problem of knowledge of the external world. To maintain itself, the agent must have procedures for forming reliable beliefs about itself and the world. Some basic knowledge about the world may be built in, just as in human beings some basic knowledge about the world may be innate, but in a complex, changing environment, neither a human being nor an artificially intelligent agent can be equipped from its inception with all the information it needs. Both must acquire new knowledge about the world by sensing their surroundings. This new knowledge comes from perception, and the basic perceptual procedures themselves must be built in.

It may be that part of the reason progress in AI has been slower than anticipated is that some of the problems in need of solutions are not straightforward engineering problems. Knowledge of the external world is an example. No set of procedures for forming beliefs about the external world counts as a solution to the problem of knowledge unless these procedures for forming and maintaining beliefs are rational, and the question of whether a given procedure for forming a belief is rational is not a question in engineering. It is, or at least is in part, a question in philosophy.


(Stanford Encyclopedia of Philosophy, Artificial Intelligence, 3.1)
A way to answer this question is to think about how we form beliefs. As an example, consider perceptual beliefs. Perception is a process that begins with the stimulation of sensors and ends with beliefs about immediate surroundings. However this works in particular cases, it is clear that perception does not always result in true beliefs. A percept P (where P ranges over the percepts possible given the perceptual apparatus of the agent) is a defeasible reason for the agent to believe that the world is the way the percept represents it to be. This reason is defeasible (can be defeated) because it is possible for the agent to acquire new information that makes it rational to withdraw the belief the agent formed on the basis of the percept. An agent, for example, who sees what looks to be a red object, but who subsequently learns that the lighting is not normal and knows that abnormal lighting can make an object merely appear red, should retract the belief that the object is red. This is the rational thing to do, but setting out such procedures for belief in general has proven difficult.
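
One common way to approximate this kind of defeasibility in a logic program is with negation as failure. The following is only a sketch under that assumption, with predicate names of our own choosing; it is an illustration of the idea, not a general solution to the problem just described.

% A sketch of a defeasible perceptual belief using negation as failure
% (our own toy encoding, not a general solution).
:- dynamic abnormal_lighting/0.

believe_red :- looks_red, \+ defeated_red.   % believe it is red unless defeated
defeated_red :- abnormal_lighting.           % learning the light is abnormal defeats it
looks_red.                                   % the percept

% With no information about the lighting, ?- believe_red. succeeds.
% After assert(abnormal_lighting), the same query fails: the belief is withdrawn.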

The decision-action part of the cycle

It is not enough simply to have beliefs. To maintain itself, the agent must act in various ways. Further, it seems that at least some of its actions will be based on its beliefs about how the world is.

Whether the agent likes its situation depends partly on what the agent believes is true of its situation. So one way for an agent to make the world more to its liking is to change its beliefs about its situation. The agent, in this way, might be happier about its situation if it had false beliefs about that situation or did not have beliefs about certain aspects of it at all.

Is changing beliefs in this way rational?

One might think that to undergo hypnosis to become happier by eliminating or acquiring new beliefs about oneself or the world is not rational, but this may depend on the case. Given that some beliefs can make one extremely miserable, it might be that such hypnosis can be rational in some cases. Maybe beliefs resulting from trauma are an example.

For such actions to be rational, the agent must have evaluated the world as represented by its beliefs. If evaluation has the result that the world is not enough to the liking of the agent, the agent forms and evaluates plans to make the world more to its liking. Epistemic cognition comes up with the plans, practical cognition selects a plan and executes it, and the cycle repeats over and over again. This is easy enough to understand in general, but it is difficult to describe the underlying cognitive mechanism in the detail required to create an artificially intelligent agent.

The Fundamental Epistemic State

What epistemic state is fundamental in the observation-thought part of the cycle? Is it knowledge, belief, or something else altogether? (The prior sections assume without argument that it is belief.)

What is knowledge? What is belief?

These are philosophical questions. Answers to them are controversial, but it is possible to make some progress by thinking about the words 'know' and 'believe.'

Consider how 'know' is used in English. Sometimes it occurs as a propositional attitude verb. The sentence 'Tom knows that Socrates died in 399 BCE' is an example. The word 'believe' is also used as a propositional attitude verb. The sentence 'Tom believes that Socrates died in 399 BCE' is an example.

To begin to understand what knowledge is, we can think about the difference in the attitudes that these two verbs express in these sentences. In both sentences, the attitude is toward the same proposition: that Socrates died in 399 BCE. This is the proposition Tom is said to know in the first sentence and to believe in the second. The difference in these sentences is in the attitude each ascribes. The first says Tom knows the proposition. The second says he believes it.

These two sentences say different things. Why? Because it is possible for the sentences to differ in truth-value. What someone knows he believes, but what someone believes he does not necessarily know. In this way, it is possible for Tom to believe but not know that Socrates died in 399 BCE.

Another way to express this point is in terms of the following two arguments:

Knowledge entails Belief
1. Tom knows that Socrates died in 399 BCE
----
2. Tom believes that Socrates died in 399 BCE

Knowledge entails Truth
1. Tom knows that Socrates died in 399 BCE
----
2. Socrates died in 399 BCE

These two arguments are valid. In each case, it is impossible for the conclusion to be false if the premise is true. The same, however, is not true for these arguments:

True Belief entails Knowledge
1. Tom believes that Socrates died in 399 BCE
2. Socrates died in 399 BCE
----
3. Tom knows that Socrates died in 399 BCE

Belief entails Truth
1. Tom believes that Socrates died in 399 BCE
----
2. Socrates died in 399 BCE

In each case, it is possible for the premises to be true and for the conclusion to be false.

For discussion of knowledge and its analysis, see The Analysis of Knowledge in the Stanford Encyclopedia of Philosophy.

Consider True Belief entails Knowledge. Not all true beliefs are knowledge. It is possible that the subject (Tom) has the true belief (that Socrates died in 399 BCE) but that he formed this belief in an irrational way. Knowledge requires that the subject be in a special position with respect to the proposition. The standard way to put this is to say that the agent must have "justification" for the belief. So, for example, suppose the subject formed the belief in a dream. In this case, although it is true that Socrates died in 399 BCE, it is false that the subject knows that Socrates died in 399 BCE.

The reason that Belief entails Truth is invalid is more obvious. From the mere fact that the subject (Tom) has the belief that Socrates died in 399 BCE, it does not follow that the belief is true.

"Thinking ... starts with an enormous collection of premises (maybe millions of them) about a very wide array of subjects)" (Thinking as Computation,19).

"[T]hinking means bringing what one knows to bear on what one is doing. But how does this work? How do concrete, physical entities like people engage with something formless and abstract like knowledge? What is proposed in this chapter (via Leibniz) is that people engage with symbolic representations of that knowledge. In other words, knowledge is represented symbolically as a collection of sentences in a knowledge base, and then entailments of those sentences are computed as needed" (Thinking as Computation, 19).

Levesque's understanding of thinking makes it presuppose the existence of knowledge (and hence beliefs).

Given this understanding of knowledge and belief, we can ask and begin to decide whether 'know' is really the right word in Levesque's assertion in Thinking as Computation that "[t]hinking is bringing to bear what you know on what you are doing" (3). His view is "that thinking is a form of computation," that just as "digital computers perform calculations on representations of numbers, human brains perform calculations on representations of what is known" (2), and that the computation is bringing what the agent knows to bear on what the agent is doing.

Is Levesque right? Is knowledge the state in terms of which an agent makes decisions?

How do we decide? One way is to consider possible counterexamples.

Suppose someone is considering whether to bring an umbrella when he goes out for the day. To decide, he looks out the window. It looks cloudy to him, so he decides to bring the umbrella. In fact, it is not cloudy. The window is only painted so that it looks cloudy outside. Since knowledge entails truth, the agent believes but does not know that it is cloudy outside.

This "umbrella" example seems to show that the epistemic state the agent should bring to bear in deciding whether to take an umbrella is belief, not knowledge.

The "Knowledge Base" (KB)

This "umbrella" example shows (or at least suggests) that the "knowledge base" (KB) in an intelligent agent consists in propositions the agent believes. Rational agents represent their circumstances in terms of their beliefs. Some of these beliefs may be knowledge, but belief is the fundamental epistemic state. This, at any rate, will be our assumption in the course for now. Although the assumption for now is that belief (not knowledge) is the fundamental epistemic state, we will continue to use the term "knowledge base" (KB).

The Computation on the Knowledge Base

In addition to having beliefs, rational agents use their beliefs in reasoning. The assumption in this course is that reasoning is a computational process. Further, in this course, the basic computational process in terms of which we will represent reasoning is backward chaining.

"The core idea is that an intelligent agent receives percepts from the external world in the form of formulae in some logical system (e.g., first-order logic), and infers, on the basis of these percepts and its knowledge base, what actions should be performed to secure the agent’s goals" (Stanford Encyclopedia of Philosophy, Artificial Intelligence, 3.2). This raises a host of questions. Here answers to some of them.

What is computation?

Computations operate on symbolic structures.

What is a symbolic structure?

The assumption in this course is that what an agent believes is represented symbolically in a "knowledge base" (KB). The representations are stated in the language of the first-order predicate calculus. (We will consider the first-order predicate calculus in more detail later.) Computation operates on these symbolic representations in the agent's "knowledge base" (KB).
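
To make this a little more concrete, here is a toy example of what such a symbolic representation might look like, written in the Prolog notation that appears later in this lecture (read ':-' as 'if'). The predicates and the example are our own, chosen only for illustration.

% A toy knowledge base (our own illustration): two beliefs represented
% symbolically, one general and one particular.
mortal(X) :- human(X).     % the agent believes that every human is mortal
human(socrates).           % the agent believes that Socrates is human

% The query ?- mortal(socrates). asks whether this belief follows from the
% knowledge base. Backward chaining reduces the goal mortal(socrates) to the
% subgoal human(socrates), which matches a fact, so it does.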

Levesque (in 2.2 of Thinking as Computation) talks about computing "logical entailment" (24). This can be confusing, but don't worry about it for now. The terms 'logical consequence' and 'logical entailment' are used in different ways. What Levesque seems to call "logical entailment," Kowalski (in Appendix A2 in How to be Artificially Intelligent) seems to call "logical consequence" (267).

We will consider logical consequence and logical entailment in more detail later in the course.
Backward chaining computes logical consequence.

Backward Chaining on the KB


Backward chaining features prominently in what is called "logic programming." Logic programming is a subject for a subsequent lecture.

"[L]ogic programming ... is the most widely used form of automated reasoning" (Stuart J. Russell and Peter Norvig Artificial Intelligence, A Modern Approach, 3rd edition, 9.4.337).

This approach to AI is sometimes called "logic-based AI" (Stanford Encyclopedia of Philosophy, Artificial Intelligence).

The central idea that motivates backward chaining goes all the way back to Aristotle (384-322 BCE). In his Nicomachean Ethics III.3.1112b, he observed that deliberation about how to satisfy a goal is a matter of working backwards from the goal to something the agent can do. Aristotle's idea, roughly, is that as part of having reason, human beings form a conception of what the good is. Given this conception, they form a belief about what is good in the particular circumstances in which they find themselves. This belief, in turn, triggers deliberation about how to bring this good about. Deliberation, in this way, is a "goal-reduction procedure." The idea of reducing goals to something the agent can do is the motivating idea that underlies backward chaining.

A simple example helps illustrate the use of backward chaining in a goal-reduction procedure.

For the example, suppose that there are what we might call basic actions. These are actions someone can do without doing something else. Lifting my right arm above my head might be an example. I can lift my right arm above my head, and I can do it without doing anything else. Because nothing is wrong with my right arm, I don't have to use something, say my left arm, to raise my right arm over my head. Opening a door is not a basic action. I can open a door, but I have to do it by doing several other more basic things. I have to grab the knob, twist, and pull the door open. So, in order to open the door, I do several other things. That is to say, for it to be true that I open the door, it has to be true that I grab the knob, twist, and pull open the door.

This distinction may be reflected formally. A formula of the form

a ← b, c.

may be understood to say that for a to be true, it is sufficient for b and c to be true. By contrast,

b.

may be understood to say that b is true. There is no backward arrow (←) because nothing needs to be true for b to be true. Now, given this explanation of the formulas, suppose that an agent has beliefs about actions and their conditions. Suppose that these beliefs are in the knowledge base (KB) or what in the context of logic programming is called a program:

a ← b, c.
a ← f.
b.
b ← g.
c.
d.
e.

If the agent (who has this knowledge base) asks itself whether it can make a true, it can determine the answer by reasoning as follows:

I realize that there are two ways for a to be true. One way for a to be true is for b and c to be true. Are b and c true? I shall consider them one at a time. Yes, b is true. Yes, c is true too. So the truth of a is a consequence of what I believe about the world. I have reasoned backwards from the goal a to things I can do (b and c). The other way for a to be true is for f to be true. Is f true? No, according to what I believe (the entries in the KB), there are no conditions sufficient for the truth of f.

Here is a more formal description of the "backward chaining" that occurs. a is the query. It is posed to the KB. To answer it (that is, to determine whether the query is a logical consequence of the KB), backward chaining checks whether the query matches the head of one of the formulas in the KB. This query matches the head of the first formula. (a is the head in a ← b, c. The tail is b, c.) Given this match, backward chaining issues two derived queries, b and c. (The tail provides the derived queries.) These queries are processed last in, first out. b matches the head of the third formula in the KB, a fact with no tail, so no new query is derived. The remaining query is c. It matches the head of the fifth formula, and again no new query is derived. Now that there are no more queries, backward chaining returns a positive answer: a is a logical consequence of the KB.
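
The same reasoning can be checked mechanically. Here is the knowledge base above transcribed into Prolog notation (explained just below), together with the query. This transcription is our own sketch, not code from the readings, and the dynamic directive is only a tool-level detail: it makes the atoms f and g, which head no formula in this KB, simply fail when queried rather than raise an "unknown procedure" error in an implementation such as SWI-Prolog.

% The knowledge base above in Prolog notation (our own sketch).
:- dynamic f/0, g/0.   % f and g head no formula, so queries to them should just fail

a :- b, c.
a :- f.
b.
b :- g.
c.
d.
e.

% The query ?- a. succeeds: backward chaining reduces a to b and c, and
% both match facts, just as in the informal reasoning above.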



Prolog is a computer programming language, which we will occasionally use in this course.
Here is a (slightly more complicated) goal-reduction example in Prolog notation.

Suppose in this example that the query is

?- a, d, e.

relative to the knowledge base

a:-b, c. (this is Prolog notation for "a ← b, c.")
a:-f.
b.
b:-g.
c.
d.
e.
f.

There are three possible computations given the query and the KB. These computations may be understood in terms of the following (upside down) tree whose nodes are the query lists. The commentary is for the computation represented in the leftmost branch.

                                                                   
                        ?- a, d, e.
                       /           \
            ?- b, c, d, e.        ?- f, d, e.
             /          \              |
     ?- c, d, e.    ?- g, c, d, e.  ?- d, e.
          |               |            |
      ?- d, e.            •         ?- e.
          |                            |
       ?- e.                           ⊥
          |
          ⊥

Here ⊥ marks an empty query list (the computation stops and the query succeeds), and • marks a dead end (g does not match the head of any clause in the KB, so that branch fails).

Commentary for the leftmost branch: The initial query list is a, d, e. This list is processed last in, first out. The first query on the list is a. To process a, the KB is searched top-down for a match with the head of one of the clauses. a matches the head of the first rule, a:-b, c. The tail (b, c) is pushed onto the query list. Now this query list (b, c, d, e) is processed in the same way. b matches a fact (b) in the KB. Facts have no tail, so nothing is pushed onto the query list. c matches a fact, and then d and e match facts in turn. The query list is now empty, so the computation stops. The initial query is successful: a, d, and e are logical consequences of the KB.

Each branch is a computation. The backward chaining process explores them one at a time. The first computation (the leftmost branch) is successful. Further computation to answer the initial query is not necessary, but for illustration the other computations are represented in the tree as well.
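
For readers who want to see the procedure itself rather than only its traces, here is a minimal backward-chaining interpreter written in Prolog. It is a sketch of the idea, not code from the readings: prove/1 is our own name, the knowledge base is the one from the example above (with f as a fact), and the dynamic directive is there only so that clause/2 may inspect the clauses in an implementation such as SWI-Prolog.

% A minimal backward-chaining interpreter (our own sketch).
:- dynamic a/0, b/0, c/0, d/0, e/0, f/0, g/0.

a :- b, c.
a :- f.
b.
b :- g.
c.
d.
e.
f.

% prove(Goal) succeeds if Goal is a logical consequence of the clauses above.
prove(true) :- !.                 % an empty body: nothing left to prove
prove((G, Rest)) :- !,            % a query list: prove the first query, then the rest
    prove(G),
    prove(Rest).
prove(G) :-
    clause(G, Body),              % find a clause whose head matches the goal
    prove(Body).                  % reduce the goal to the clause's tail

% The query ?- prove((a, d, e)). succeeds, mirroring the leftmost branch
% of the tree above.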

The London Underground Example

Consider a real world example, the "London Underground" example that Robert Kowalski discusses (in Chapter 1 of Computational Logic and Human Thinking). The instructions in the emergency notice in the London Underground can be understood to include a goal-reduction procedure. The first sentence

Press the alarm signal button to alert the driver

can be understood as saying that

the goal of alerting the driver reduces to the subgoal of pressing the alarm signal button



We will talk more about maintenance goals later in the course--for now know that they are an important part of the logic programming/agent model.
If the typical passenger has the "maintenance goal"

If there is an emergency, then you deal with the emergency appropriately

and the beliefs

You deal with the emergency appropriately if you get help
You get help if you alert the driver

then the instructions in the London Underground may be incorporated into the agent's mind in the form of a logic program (in which the beliefs are in the KB).
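
Here is one way the passenger's beliefs might be written down as such a logic program, as a hedged sketch in Prolog notation. The predicate names are ours rather than Kowalski's, the rule for alerting the driver comes from the first sentence of the notice, and pressing the alarm signal button is written as a fact only to stand in for a basic action the passenger can simply perform.

% A sketch of the passenger's beliefs as a logic program (our own encoding).
deal_with_emergency_appropriately :- get_help.
get_help :- alert_the_driver.
alert_the_driver :- press_the_alarm_signal_button.

% Stand-in for a basic action the passenger can simply perform.
press_the_alarm_signal_button.

% The query ?- deal_with_emergency_appropriately. succeeds by backward
% chaining through get_help and alert_the_driver down to the basic action
% of pressing the alarm signal button.

The maintenance goal itself is not part of this backward-chaining sketch; as the next paragraphs explain, an observed emergency triggers it and produces the achievement goal that the query above represents.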

This program functions as the "knowledge" the agent brings to bear on the situation. If the agent observes an emergency, his observation will trigger the antecedent

there is an emergency

of his or her maintenance goal

If there is an emergency, then you deal with the emergency appropriately

This in turn gives him a goal to achieve, an achievement goal

I deal with the emergency appropriately

To achieve this goal, backward chaining reduces the goal to an appropriate subgoal. So, given his beliefs, backward chaining results in a plan of action

I alert the driver.

A Dual Process Model of Human Thinking

It seems obvious that human beings do not always explicitly reason in the way set out in the "London Underground" example, and Kowalski does not think otherwise. His view, as I understand it, is that human beings do sometimes reason explicitly in this way but that they also, perhaps more frequently, employ what he calls "intuitive thinking."


"The agent observes events that take place in the world and the properties that those events initiate and terminate. It uses forward reasoning to derive conclusions of its observations. In many cases, these conclusions are actions, triggered by instinctive or intuitive stimulus-response associations, which can also be expressed in the logical form of conditionals. The agent may execute these actions by reflex, automatically and immediately. Or it may monitor them by performing higher-level reasoning, as in dual process models of human thinking" (Computational Logic and Human Thinking, 20).
"[I]n recent years, cognitive psychologists have developed Dual Process theories, which can be understood as combining descriptive and normative theories. Viewed from the perspective of Dual Process theories, traditional descriptive theories focus on intuitive thinking, which is associative, automatic, parallel and subconscious. Traditional normative theories, on the other hand, focus on deliberative thinking, which is rule-based, effortful, serial and conscious. In this book, I will argue that Computational Logic is a dual process theory, in which intuitive and deliberative thinking are combined" (Computational Logic and Human Thinking, 15).

These "dual process theories" to which Kowalski refers are interesting, but we will not consider them further. We look to human beings as an example, but our aim is not to model human intelligence. It is to design the "mind" of a rational agent whose intelligence is human-like.

What We Have Accomplished in This Lecture

At this point, we have constructed a very simple (and clearly incomplete) model of the intelligence of a rational agent. Rational agents have beliefs about the world, and they reason in terms of these beliefs to decide what to do. The KB is a symbolic structure that represents the agent's beliefs. The backward chaining procedure on the KB represents the agent's reasoning.



