Philosophy, Computing, and Artificial Intelligence

PHI 319

Computational Logic and Human Thinking
Chapter 3 (54-64), Chapter 7 (109-122), Chapter 8 (123-135, 139-140)


The Fox and the Crow

The example of the fox and the crow illustrates the logic programming/agent model. The example is primitive, but it shows some of the promise of the model.

The fox has "beliefs" and a "goal." He uses his beliefs to make a "plan" to achieve the goal.

There are two kinds of goals: maintenance goals and achievement goals. Maintenance goals are incorporated into the logic programming/agent model in such a way that (against the background of certain beliefs) they issue in achievement goals that function as queries to the program.

The Fox's Beliefs:

x has y if x is near y and x picks up y.
I am near the cheese if the crow has the cheese and the crow sings.
The crow sings if I praise the crow.
The crow has the cheese.

The Fox's Achievement Goal:

I have the cheese.

The problem is to make the achievement goal work together with the beliefs in such a way that the fox can be understood to have the intelligence of a rational agent.

Part of the solution is to treat the achievement goal as a query. Still, the query has to be triggered.
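To make this concrete, here is a minimal sketch in Python (not from Kowalski's text) of goal reduction over the fox's beliefs. The achievement goal is posed as a query, clauses are used from head to body, and any subgoal with no clause is assumed to be an action the fox can perform; the dictionary representation is a simplifying assumption.

# A minimal sketch of goal reduction (backward chaining) over the fox's beliefs.
# Each belief is a clause: head <- body (a list of subgoals). Facts have empty bodies.
BELIEFS = {
    "I have the cheese": [["I am near the cheese", "I pick up the cheese"]],
    "I am near the cheese": [["the crow has the cheese", "the crow sings"]],
    "the crow sings": [["I praise the crow"]],
    "the crow has the cheese": [[]],          # a fact: a clause with an empty body
}

def reduce_goal(goal, plan):
    clauses = BELIEFS.get(goal)
    if clauses is None:                       # no clause: assume an executable action
        plan.append(goal)
        return
    body = clauses[0]                         # use the first (here, the only) clause
    for subgoal in body:
        reduce_goal(subgoal, plan)

plan = []
reduce_goal("I have the cheese", plan)        # the achievement goal posed as a query
print(plan)                                   # ['I praise the crow', 'I pick up the cheese']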

The Production System Model

The logic programming/agent model evolved from the production system model.

A production system is a collection of condition-action rules embedded in a cycle. The cycle repeats as follows. It reads an input fact, uses forward chaining (from conditions to actions) to match the fact with one of the conditions of a production rule, verifies the remaining conditions of the rule, and then derives the actions of the rule as candidates to be executed.

A rational agent must have beliefs correlated with states of its environment. It must have dispositions to like or dislike its situation, and it must have cognitive mechanisms to engage in activity that has a tendency to change its immediate environment so that it acquires new beliefs that interact with its dispositions in such a way that it likes its situation better.

The cycle in a production system provides a primitive model of this cognitive behavior.

The Adam and Eve Example

Kowalski's "Adam and Eve" example (Computational Logic and Human Thinking, 117) illustrates the idea that underlies production systems.

The initial facts in the "Adam and Eve" example are:

Eve mother of Cain
Eve mother of Abel
Adam father of Cain
Adam father of Abel
Cain father of Enoch
Enoch father of Irad

The production rules in the "Adam and Eve" example are:

If X mother of Y, then add X ancestor of Y.
If X father of Y, then add X ancestor of Y.
If X ancestor of Y, and Y ancestor of Z, then add X ancestor of Z.

If the rules are applied until no new facts can be added, then all the "consequences" of the initial facts will have been added. This behavior in production systems may be understood as a forerunner of forward reasoning from conditionals. The condition-action rules, without the add action, are indistinguishable from the conditionals in the logic programming/agent model.
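As a rough illustration, the following Python sketch applies the three production rules by forward chaining until no new facts can be added. The triple representation of facts is an assumption made for the sketch.

# Forward chaining over the "Adam and Eve" facts until closure.
facts = {
    ("mother", "Eve", "Cain"), ("mother", "Eve", "Abel"),
    ("father", "Adam", "Cain"), ("father", "Adam", "Abel"),
    ("father", "Cain", "Enoch"), ("father", "Enoch", "Irad"),
}

def step(facts):
    """Apply the three production rules once and return the facts they add."""
    new = set()
    for (rel, x, y) in facts:                     # rules 1 and 2
        if rel in ("mother", "father"):
            new.add(("ancestor", x, y))
    for (r1, x, y) in facts:                      # rule 3 (transitivity)
        for (r2, y2, z) in facts:
            if r1 == "ancestor" and r2 == "ancestor" and y == y2:
                new.add(("ancestor", x, z))
    return new - facts

while True:                                       # repeat until no new facts are added
    added = step(facts)
    if not added:
        break
    facts |= added

print(sorted(f for f in facts if f[0] == "ancestor"))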

The Goal Stack in Production Systems

In addition to "drawing consequences," production systems approximate "practical reasoning."

To make this work, there must be a way to represent both "ordinary facts" (observations or consequences of observations) and "goal facts." Production systems accommodate these two kinds of "facts" by employing different structures for ordinary facts and for goal facts.

Goals and subgoals are stored in a stack. Goal-reduction is implemented in production systems, not by backward chaining, but by forward chaining with production rules roughly of the form

If goal G and conditions C, then add H as a subgoal

Only the top goal can trigger a production rule. When a goal is reduced to a subgoal, the new subgoal is pushed onto the stack. When a goal is "solved," it is popped off the stack.

Production Rules for the Fox and the Crow

Even for relatively simple cognitive behavior, the production rules will be complicated. Here are some of the rules necessary for the fox and the crow example.

If your goal (at the top of the stack) is to have an object,
and you are not near the object,
then add (by pushing on to the stack) the goal to be near the object.

If your goal (at the top of the stack) is to be near an object,
and you are near the object,
then delete the goal (by popping it off the stack).

If your goal (at the top of the stack) is to have an object,
and you are near the object,
then pick up the object and delete the goal (by popping it off the stack).
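Below is a small Python sketch of how these rules might drive a goal stack. The world-state flag and the step that gets the fox near the cheese are assumptions added for the sketch; the point is only that forward chaining on the top goal pushes subgoals and pops solved goals.

# A sketch of a goal stack driven by the three production rules above.
state = {"near": False}                       # the fox starts out not near the cheese
stack = [("have", "cheese")]                  # goal facts live on a stack
actions = []                                  # actions the rules decide to execute

while stack:
    goal = stack[-1]                          # only the top goal can trigger a rule
    if goal == ("have", "cheese") and not state["near"]:
        stack.append(("near", "cheese"))      # rule 1: push the subgoal "be near"
    elif goal == ("near", "cheese") and state["near"]:
        stack.pop()                           # rule 2: the goal is solved, pop it
    elif goal == ("have", "cheese") and state["near"]:
        actions.append("pick up the cheese")  # rule 3: act and pop the goal
        stack.pop()
    else:
        # Not covered by the rules above: assume some behavior (the praise-the-crow
        # plan) gets the fox near the cheese and updates the state accordingly.
        state["near"] = True

print(actions)                                # ['pick up the cheese']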

Achievement Goals and Maintenance Goals

As part of a theory of artificial intelligence, logic programming incorporates the primary insight of production systems: the observation-thought-decision-action cycle. Instead of production rules, the logic programming model makes use of conditionals. In addition, to handle goals as opposed to queries, the logic programming/agent model incorporates achievement and maintenance goals.

Maintenance goals function to maintain a relationship with the changing state of the world. In the logic programming/agent model, a maintenance goal takes the form of a conditional

if condition, then achievement goal

When the agent has a belief whose propositional content matches the condition in the antecedent of the maintenance goal, the achievement goal in the consequent is triggered.

Maintenance goals encode relationships with the world that an agent is designed or has evolved to maintain through its behavior. If the agent is aware that the relationship fails, the maintenance goal issues in an achievement goal. The achievement goal triggers behavior to achieve the achievement goal. This behavior is an effort to reinstate the relationship with the world.

The states in the world that matter to the life of the agent are encoded in the antecedents of maintenance goals. Consider hunger in animals. When animals are hungry, they tend to move to find food and eat it. In terms of the logic programming/agent model, the conditional

"if I am hungry, I find food and eat it"

is instantiated in the animal so that it functions as a maintenance goal. When the animal registers the truth of the antecedent, the content of the consequent is activated as an achievement goal. The achievement goal moves the animal to take steps to find food and eat it.

To understand more clearly what a maintenance goal is, it is helpful to think about desire in terms of the (ancient Platonic) model of depletion and replenishment. The object of the desire replenishes and thus maintains the agent. (In the example of the fox and the crow, finding and eating food replenishes the fox.) The desire (for food, in the fox and crow example) arises because the agent is depleted in a certain way (the fox is hungry). The maintenance goal links the depletion (hunger) to the condition (eating) that replenishes it. In this way, maintenance goals function to focus the agent on the information it needs. Not everything in the world matters to the life of the agent. The agent does not just form beliefs about the world. It forms beliefs in connection with whether the conditions have been met to trigger a maintenance goal and thus to introduce a goal that the agent needs to achieve.

In the logic programming/agent model, maintenance goals, achievement goals, and the beliefs function in different ways. Beliefs trigger maintenance goals. When maintenance goals are triggered, they issue in achievement goals. These achievement goals function as queries. These queries initiate the construction of plans to bring about achievement goals.

These functional differences between maintenance goals, achievement goals, and the beliefs in the KB are not reflected in their logical form within the logic programming/agent model. An achievement goal, for example, describes a possible state of the world. So does a belief. Their logical form is the same. The difference between them is in how they function in the model.
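One way to picture the functional difference is the following Python sketch, in which a maintenance goal is stored as a condition/achievement-goal pair, beliefs are a set, and a triggered achievement goal is posted as a query. The string representation is an illustrative assumption, not the model's official syntax.

# Beliefs trigger maintenance goals; maintenance goals issue in achievement goals.
MAINTENANCE_GOALS = [
    ("I am hungry", "I have food and I eat food"),    # if condition, then achievement goal
]

beliefs = set()          # the agent's current beliefs
queries = []             # achievement goals, posted as queries for plan construction

def observe(fact):
    """Add an observation to the beliefs and check the maintenance goals."""
    beliefs.add(fact)
    for condition, achievement_goal in MAINTENANCE_GOALS:
        if condition in beliefs:
            queries.append(achievement_goal)          # the consequent becomes a query

observe("I am hungry")
print(queries)           # ['I have food and I eat food']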

What the Fox is Thinking

Against this background, we can return to the example of the fox and the crow to see in more detail how the logic programming/agent model of intelligence works. The fox's goal of having the cheese ("I have the cheese") is a goal of achieving (bringing about or causing) some future state of the world in order to maintain a certain existence, namely, not being hungry.

Suppose the fox has the following maintenance goal:

If I am hungry, then I have food and I eat food.

Suppose the fox has the following beliefs:

I have X if I am near X and I pick up X.
I am near the cheese if the crow has the cheese and the crow sings.
The crow sings if I praise the crow.
Cheese is food.

In the first iteration of the cycle, the fox makes an observation. "Observation" here covers a process about which much more needs to be said. For now, we simply assume that observations are a way of forming beliefs and that it is possible to observe pretty much any positive feature of the world. In addition, we have not said anything about what causes the switch from observation to checking to see whether maintenance goals are triggered. Further, we have not said how the agent acquires maintenance goals. These problems need to be solved in a complete model of intelligence. With these caveats noted, suppose that in the first iteration the fox has the observation

I am hungry

From the maintenance goal, by forward chaining, the fox gets the achievement goal

I have food and I eat food

In the second iteration, the cognition is not forward chaining. It is backward chaining. This is the one thing the model does well. The fox is trying to figure out how to achieve the newly introduced goal. He uses backward reasoning, which introduces new subgoals

I am near food and I pick up food and I eat the food

The logic programming/agent model contains no mechanism to control the switch from backward chaining to observation. This is a problem that needs to be solved. In the third iteration, suppose that the fox turns once again to making observations. Suppose that the fox observes that

The crow has the cheese

The logic programming/agent model contains no mechanism for determining when to reason forward from observations. This is another problem that needs a solution. Now the fox can either reason forward from the observation or reason backward from the subgoals. What is it rational for him to do? Reasoning forward from observations would seem to have higher priority, since there might be an emergency or an opportunity that should not be missed. So the fox can reason to the new belief

I am near the cheese if the crow sings

In the fourth iteration, the fox can use his belief that "Cheese is food" to transform the subgoal "I am near food" to the subgoal "I am near cheese." (This also transforms the other subgoals to "I pick up cheese" and "I eat cheese.") Now the fox can use backward chaining to match the subgoal "I am near cheese" to the head of the belief "I am near the cheese if the crow sings" to introduce the new subgoal "The crow sings." So the fox has thought his way to the following subgoals:

The crow sings and I pick up the cheese and I eat the cheese

In the fifth iteration, the fox uses backward reasoning to reduce "The crow sings" to "I praise the crow." Now the fox has a "plan of action":

I praise the crow and I pick up the cheese and I eat the cheese

The model does not specify the mechanism by which this happens, but at this point the fox forms the "intention" to execute the plan, beginning with the first action.

Nor does the model specify the mechanism for monitoring execution. In the sixth iteration, the fox determines whether the action was executed successfully: it observes whether it is praising the crow. Suppose the fox sees itself praising the crow. Then, by forward reasoning, the plan of action becomes

I pick up the cheese and I eat the cheese

To continue with plan execution, the fox takes the next action from the plan.
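The following Python sketch loosely traces the iterations just described. It makes several simplifying assumptions: beliefs and goals are ground strings, the belief "cheese is food" is applied by direct substitution rather than by unification, and goal reduction is simply run to a fixed point before and after each observation, so the control problems noted above are not modeled.

# A simplified trace of the fox's observation-thought-action cycle.
BELIEFS = {                                    # head: body (a conjunction of subgoals)
    "I have cheese": ["I am near cheese", "I pick up cheese"],
    "I am near cheese": ["the crow has cheese", "the crow sings"],
    "the crow sings": ["I praise the crow"],
}
FOOD_FACT = ("cheese", "food")                 # the belief "cheese is food"
MAINTENANCE_GOAL = ("I am hungry", ["I have food", "I eat food"])

observed_facts = set()
goals = []                                     # the current conjunction of (sub)goals

def observe(fact):
    """Observations become beliefs and may trigger the maintenance goal."""
    observed_facts.add(fact)
    condition, achievement_goals = MAINTENANCE_GOAL
    if fact == condition:                      # forward chaining from the observation
        goals.extend(achievement_goals)

def specialize(goal):
    """Use "cheese is food" to turn food-goals into cheese-goals (a stand-in for unification)."""
    specific, general = FOOD_FACT
    return goal.replace(general, specific)

def reduce_goals():
    """Backward chaining: replace goals by subgoals until nothing changes."""
    while True:
        new_goals, changed = [], False
        for goal in goals:
            goal = specialize(goal)
            if goal in observed_facts:         # already believed to be true: drop it
                changed = True
            elif goal in BELIEFS:              # reduce the goal via a matching clause
                new_goals.extend(BELIEFS[goal])
                changed = True
            else:                              # no clause: keep it as an outstanding goal/action
                new_goals.append(goal)
        goals[:] = new_goals
        if not changed:
            return

observe("I am hungry")                         # first iteration
reduce_goals()                                 # second iteration (run to a fixed point here)
observe("the crow has cheese")                 # third iteration
reduce_goals()                                 # fourth and fifth iterations
print(goals)   # ['I praise the crow', 'I pick up cheese', 'I eat cheese']: the plan of action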

What We Have Accomplished in this Lecture

The model of the intelligence of a rational agent, although more complete than before, is still too simple to be realistic. It also suffers from certain problems, but it is a start. Rational agents have beliefs about the world and reason in terms of these beliefs to decide what to do. The logic program is a symbolic structure that represents the agent's beliefs. When these beliefs show the agent that the world is a certain way, they trigger a maintenance goal. The maintenance goal issues in an achievement goal. The achievement goal functions as a query. This query initiates plan construction.



