Philosophy, Computing, and Artificial Intelligence

PHI 319. The Fox and the Crow.

Computational Logic and Human Thinking
Chapter 3 (54-64), Chapter 7 (109-122), Chapter 8 (123-135, 139-140)


The Fox and the Crow

 The example of the fox and the crow illustrates the logic programming/agent model. The example is primitive, but it shows some of the promise of the model.

The fox has "beliefs" and a "goal." He uses his beliefs to make a "plan" to achieve the goal.

In the logic programming/agent model, there are two kinds of goals: maintenance and achievement goals. Maintenance goals issue in achievement goals that function as queries to the KB.

The Fox's Beliefs:

x has y if x is near y and x picks up y.
I am near the cheese if the crow has the cheese and the crow sings.
The crow sings if I praise the crow.
The crow has the cheese.

The Fox's Achievement Goal:

I have the cheese.

The problem is to make the achievement goal work together with the beliefs in such a way that the fox can be understood to have the intelligence of a rational agent.

Part of the solution is to treat the achievement goal as a query. Still, the query has to be triggered. This happens in part because of a maintenance goal the fox possesses.

To see how this works to produce what might be understood as rational action, consider first an older model that the logic programming/agent model improves on in certain ways.

The Production System Model

The logic programming/agent model evolved from the production system model.

A production system is a collection of condition-action rules embedded in a cycle. The cycle repeats as follows. It reads an input fact, uses forward chaining (from conditions to actions) to match the fact with one of the conditions of a production rule, verifies the remaining conditions of the rule, and then derives the actions of the rule as candidates to be executed.
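To make the cycle concrete, here is a minimal sketch in Python (my own encoding, not anything from the reading). Working memory is a set of facts, and each rule is represented as a function that checks its conditions against working memory and returns the facts its action would add.

def run_production_system(working_memory, rules):
    """Fire condition-action rules until no rule adds a new fact."""
    while True:
        new_facts = set()
        for rule in rules:                      # forward chaining: from conditions to actions
            for fact in rule(working_memory):   # facts the rule's action proposes to add
                if fact not in working_memory:
                    new_facts.add(fact)
        if not new_facts:                       # quiescence: no candidate action adds anything new
            return working_memory
        working_memory |= new_facts             # execute the candidate actions

The "Adam and Eve" example below can be run through a loop of exactly this shape.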

A rational agent must have beliefs correlated with states of its environment. It must have dispositions to like or dislike its situation, and it must have cognitive mechanisms to engage in activity that has a tendency to change its immediate environment so that it acquires new beliefs that interact with its dispositions in such a way that it likes its situation better.

The cycle in a production system provides a primitive model of this cognitive behavior.

The Adam and Eve Example

Kowalski's "Adam and Eve" example (Computational Logic and Human Thinking, 117) illustrates the idea that underlies production systems.

The initial facts in the "Adam and Eve" example are:

Eve mother of Cain
Eve mother of Abel
Adam father of Cain
Adam father of Abel
Cain father of Enoch
Enoch father of Irad

The condition-action rules in the "Adam and Eve" example are:

If X mother of Y, then add X ancestor of Y.
If X father of Y, then add X ancestor of Y.
If X ancestor of Y, and Y ancestor of Z, then add X ancestor of Z.

The condition-action rules in production systems are not conditionals expressed in the language of logic. Still, if the rules are applied until no new facts can be added, then all the consequences of the initial facts will have been added. This behavior in production systems is a forerunner of reasoning in terms of conditionals expressed in the language of logic in the logic programming/agent model.
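Here is a sketch of this forward chaining in Python, using a simple tuple encoding of the facts (my own encoding, not from the reading):

facts = {
    ("mother", "Eve", "Cain"), ("mother", "Eve", "Abel"),
    ("father", "Adam", "Cain"), ("father", "Adam", "Abel"),
    ("father", "Cain", "Enoch"), ("father", "Enoch", "Irad"),
}

def new_ancestor_facts(facts):
    """One pass of the three rules; returns the facts their actions would add."""
    new = set()
    for rel, x, y in facts:
        if rel in ("mother", "father"):              # rules 1 and 2
            new.add(("ancestor", x, y))
    for rel1, x, y in facts:
        for rel2, w, z in facts:
            if rel1 == "ancestor" and rel2 == "ancestor" and y == w:
                new.add(("ancestor", x, z))          # rule 3: transitivity
    return new - facts

while True:                                          # apply the rules until no new facts are added
    added = new_ancestor_facts(facts)
    if not added:
        break
    facts |= added

print(sorted(fact for fact in facts if fact[0] == "ancestor"))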

The Goal Stack in Production Systems

In addition to "drawing consequences," production systems approximate "practical reasoning."

To make this work, there must be a way to represent both "ordinary facts" (observations or consequences of observations) and "goal facts." Production systems accommodate these two kinds of "facts" by employing different structures for ordinary facts and for goal facts.

Goals and subgoals are stored in a stack. Goal-reduction is implemented in production systems by forward chaining with condition-action rules roughly of the form

If goal G and conditions C, then add H as a subgoal

Only the top goal can trigger the action in a condition-action rule. When a goal is reduced to a subgoal, the subgoal is pushed onto the stack. When a goal is "solved," it is popped off the stack.
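Here is a minimal Python sketch of these stack mechanics, with a hypothetical set of goal-reduction rules (not from the reading) just to show how subgoals are pushed and solved goals are popped:

reductions = {                                    # goal -> the subgoals it reduces to
    "have tea": ["have boiling water", "have teabag in cup"],
    "have boiling water": ["kettle is on"],
}

stack = ["have tea"]
while stack:
    goal = stack[-1]                              # only the top goal can trigger a rule
    if goal in reductions:
        stack.pop()                               # reduce the goal ...
        for subgoal in reversed(reductions[goal]):
            stack.append(subgoal)                 # ... by pushing its subgoals onto the stack
    else:
        print("solved directly:", stack.pop())    # a goal with no rule is solved and popped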

Condition-action Rules for the Fox and the Crow

Even for relatively simple cognitive behavior, the condition-action rules will be complicated. Here are some of the rules necessary for the "Fox and the Crow" example.

If your goal (at the top of the stack) is to have an object,
and you are not near the object,
then add (by pushing on to the stack) the goal to be near the object.

If your goal (at the top of the stack) is to be near an object,
and you are near the object,
then delete the goal (by popping it off the stack).

If your goal (at the top of the stack) is to have an object,
and you are near the object,
then pick up the object and delete the goal (by popping it off the stack).
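The following Python sketch encodes these three rules with my own tuple representation of goals and facts (an assumption for illustration, not Kowalski's notation). A goal stack and a set of facts about the current situation drive the cycle:

def fox_cycle(goal_stack, facts):
    actions = []
    while goal_stack:
        kind, obj = goal_stack[-1]                    # only the top goal can trigger a rule
        if kind == "have" and ("near", obj) not in facts:
            goal_stack.append(("near", obj))          # rule 1: push the goal to be near the object
        elif kind == "near" and ("near", obj) in facts:
            goal_stack.pop()                          # rule 2: the goal is satisfied; pop it
        elif kind == "have" and ("near", obj) in facts:
            actions.append(("pick up", obj))          # rule 3: pick up the object ...
            goal_stack.pop()                          # ... and pop the solved goal
        else:
            break                                     # no rule applies in the current situation
    return actions

print(fox_cycle([("have", "cheese")], {("near", "cheese")}))   # [('pick up', 'cheese')]
print(fox_cycle([("have", "cheese")], set()))                  # [] -- stuck: no rule for getting near

The second call shows why the full set of rules would have to be more complicated: nothing here tells the fox how to get near the cheese when it is not already near it.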

Achievement Goals and Maintenance Goals

The logic programming/agent model tries to incorporate the primary insight of production systems: that for rational action, it is necessary to implement a form of the observation-thought-decision-action cycle. In the logic programming/agent model, however, the conditionals are expressed in the language of logic rather than as condition-action rules. In addition, to handle goals as opposed to queries, it incorporates achievement and maintenance goals.

To begin to understand how the observation-thought-decision-action cycle can be incorporated into the logic programming/agent model, a first step is to consider maintenance goals.

Aristotle (384-322 BCE) does not distinguish between maintenance and achievement goals, but he does try to explain how thinking can result in action.

"[W]hen you think that every man should walk and you yourself are a man, you immediately walk.... Sometimes one does not stop and consider one of the two premises, namely, the obvious one; for example, if walking is good for a man, one does not waste time over the premise 'I am myself a man.' Hence such things as we do without calculation, we do quickly. For when a man acts for the object which he has in view from either perception or imagination or thought, he immediately does what he desires.... For example, the appetite says [drink]; this is drink, says sensation or imagination or thought, and one immediately drinks. It is in this manner that animals are impelled to move and act..." (On the Movement of Animals VII.70a1).
Maintenance goals function to maintain a relationship with the changing state of the world. In the logic programming/agent model, a maintenance goal takes the form of a conditional

if condition, then achievement goal

When the agent has a belief whose propositional content matches the condition in the antecedent of the maintenance goal, the achievement goal in the consequent is triggered.

Maintenance goals encode relationships with the world that an agent is designed or has evolved to maintain through its behavior. If the agent comes to believe that the relationship fails, the maintenance goal issues in an achievement goal. The achievement goal, in turn, triggers behavior aimed at achieving it. This behavior is an effort to reinstate the relationship with the world.

The states in the world that matter to the life of the agent are encoded in the antecedents of maintenance goals. Consider hunger in animals. When animals are hungry, they tend to move to find food and eat it. In terms of the logic programming/agent model, the conditional

"if I am hungry, I find food and eat it"

is instantiated in the animal so that it functions as a maintenance goal. When the animal registers the truth of the antecedent, the content of the consequent is activated as an achievement goal. The achievement goal moves the animal to take steps to find food and eat it.
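Here is a minimal Python sketch of this triggering, with the contents of beliefs and goals represented as plain strings (an assumption for illustration only):

maintenance_goals = [
    ("I am hungry", "I have food and I eat the food"),   # if condition, then achievement goal
]

def observe(new_belief, beliefs, achievement_goals):
    """Record a new belief and fire any maintenance goal whose condition it matches."""
    beliefs.add(new_belief)
    for condition, goal in maintenance_goals:
        if condition in beliefs and goal not in achievement_goals:
            achievement_goals.append(goal)               # the consequent becomes a goal to achieve

beliefs, achievement_goals = set(), []
observe("I am hungry", beliefs, achievement_goals)
print(achievement_goals)    # ['I have food and I eat the food']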



To understand more clearly what a maintenance goal is, it is helpful to think about desire in terms of the ancient Platonic model of depletion and replenishment. The object of the desire replenishes and thus maintains the agent. (In the example of the fox and the crow, finding and eating food replenishes the fox.) The desire (for food, in the fox and crow example) arises because the agent is depleted in a certain way (the fox is hungry). The maintenance goal links the depletion (hunger) to the condition (eating) that replenishes it.

In this way, maintenance goals function to focus the agent on what is important. Not everything in the world matters to the life of the agent. The agent does not just form beliefs about the world. The agent forms beliefs in connection with whether the conditions have been met to trigger a maintenance goal and thus to introduce a goal that the agent needs to achieve.

In the logic programming/agent model, maintenance goals, achievement goals, and the beliefs function in different ways. Beliefs trigger maintenance goals. When maintenance goals are triggered, they issue in achievement goals. These achievement goals function as queries. These queries initiate the construction of plans to bring about achievement goals.

These functional differences between maintenance goals, achievement goals, and the beliefs in the KB are not reflected in their form in the logic programming/agent model. The propositional content of an achievement goal, for example, is a possible state of the world. The same is true for a belief. Their form is the same insofar as the content of both is represented symbolically as formulas in the language of the first-order predicate calculus. The difference between them is in how they function in the logic programming/agent model.
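One way to picture this point about form and function (a sketch under my own assumptions, not code from the reading): all three kinds of item can be stored as sentences of the same form, and what distinguishes them is the component of the agent's state they occupy and how the cycle uses that component.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    beliefs: list = field(default_factory=list)            # trigger maintenance goals; answer queries
    maintenance_goals: list = field(default_factory=list)  # (condition, achievement goal) pairs
    achievement_goals: list = field(default_factory=list)  # function as queries that initiate planning

fox = AgentState(
    beliefs=["the crow has the cheese", "cheese is food"],
    maintenance_goals=[("I am hungry", "I have food and I eat the food")],
)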

What the Fox is Thinking

Against this background, we can return to the example of the fox and the crow to see in more detail how the observation-thought-decision-action cycle might be incorporated into the logic programming/agent model of the intelligence of a rational agent. The fox's goal of having the cheese ("I have the cheese") is a goal of achieving (bringing about or causing) some future state of the world in order to maintain a certain existence, namely, not being hungry.

Suppose the fox gets an achievement goal because he has the following maintenance goal:

If I am hungry, then I have food and I eat the food.

Suppose the fox has the following beliefs:

I have X if I am near X and I pick up X.
I am near the cheese if the crow has the cheese and the crow sings.
The crow sings if I praise the crow.
Cheese is food.

(Note that the example is now slightly different. Given this maintenance goal, the achievement goal is no longer "I have the cheese." It is "I have food and I eat the food.")

The first step is an observation. "Observation" here covers a process about which much more needs to be said. For now, we assume that observations are a way of forming beliefs and that it is possible to observe pretty much any positive feature of the world. In addition, we have not said anything about what causes the switch from observation to checking to see whether maintenance goals are triggered. Further, we have not said how the agent acquires maintenance goals. These problems need to be solved in a complete model of intelligence. In this first step of the observation-thought-decision-action cycle in the logic programming/agent model, the fox has the observation

I am hungry

From the maintenance goal, by forward reasoning, the fox gets the achievement goal

I have food and I eat the food

The logic programming/agent model contains no mechanism to control the switch from backward chaining to observation. This is a problem that needs to be solved. In the second step, suppose that the fox turns once again to making observations. Suppose that the fox observes that

The crow has cheese

The logic programming/agent model contains no mechanism for determining when to reason forward from observations. This is another problem that needs a solution. The fox can reason forward from this new belief he got from observation, or he can reason backward from the subgoals. What is it rational for him to do? It seems that reasoning forward from observations should sometimes have a higher priority, since there might be an emergency or an opportunity that should not be missed. In this case, it is an opportunity. Given his beliefs that "The crow has cheese" and "Cheese is food," the fox can transform his initial goal

I have food and I eat the food

into the goal

I have cheese and I eat the cheese

In the third step, the cognition is not forward reasoning. It is backward reasoning. This is the one thing the model does well. The fox is trying to figure out how to achieve the goal of getting the cheese from the crow. In the backward reasoning, "I have cheese" unifies with the head of the first rule. This introduces the subgoals

I am near the cheese and I pick up the cheese

These subgoals are pushed onto the goal list

I am near the cheese and I pick up the cheese and I eat the cheese

The fox can now, in the fourth step, engage in more backward reasoning to match the subgoal "I am near the cheese" to the head of the belief "I am near the cheese if the crow has cheese and the crow sings" to introduce the new subgoals

The crow has cheese and the crow sings

So now the goal list is

The crow has cheese and the crow sings and I pick up the cheese and I eat the cheese

In the fifth step, the fox can match the subgoal "The crow has cheese" with the belief he got from observation. So now the goal list is

The crow sings and I pick up the cheese and I eat the cheese

In the sixth step, the fox can use backward reasoning to match the subgoal "The crow sings" with the head of his belief that "the crow sings if I praise the crow." Now the goal list is

I praise the crow and I pick up the cheese and I eat the cheese

Given the KB, there is no more backward reasoning to do. So the goal list is a plan. Once the fox executes it, given that all goes well, he will have achieved his initial goal.
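The whole sequence of backward reasoning can be sketched in Python as the rewriting of a goal list (my own propositional encoding; the beliefs in the reading contain variables, which this sketch flattens out):

beliefs = {
    # head: body (the conditions the head reduces to)
    "I have the cheese": ["I am near the cheese", "I pick up the cheese"],
    "I am near the cheese": ["the crow has the cheese", "the crow sings"],
    "the crow sings": ["I praise the crow"],
}
observed = {"the crow has the cheese"}           # the belief the fox got from observation

goals = ["I have the cheese", "I eat the cheese"]
while True:
    for i, goal in enumerate(goals):
        if goal in observed:
            goals[i:i + 1] = []                  # the subgoal already holds; drop it
            break
        if goal in beliefs:
            goals[i:i + 1] = beliefs[goal]       # backward reasoning: replace the goal by the body
            break
    else:
        break                                    # nothing left to reduce: the goal list is a plan

print(goals)   # ['I praise the crow', 'I pick up the cheese', 'I eat the cheese']

The final list matches the plan above: praise the crow, pick up the cheese, eat the cheese.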

What we have Accomplished in this Lecture

The logic programming/agent model of the intelligence of a rational agent, although now more complete, is still clearly too simple to be realistic. It also suffers from certain problems, but it is a start that begins to indicate how one might use logic to implement the intelligence of a rational agent on a machine. Rational agents have beliefs about the world and reason in terms of these beliefs to decide what to do. In terms of the logic programming/agent model, a KB ("knowledge base") is a symbolic structure that represents the agent's beliefs. When beliefs in the KB show the agent that the world is a certain way in which the agent has an interest, the beliefs trigger a maintenance goal. The maintenance goal issues in an achievement goal.



