The Fox and the Crow

Introduction to the logic programming/agent model


Computational Logic and Human Thinking, chapters 3 and 8


The Fox makes a Plan to get the Cheese

The story of the fox and the crow illustrates the logic programming/agent model of intelligence. In this story, the fox has a "goal" and uses backward chaining to figure out how to achieve it.

(The example is primitive, but it shows some of the promise of the logic programming/agent model for understanding intelligence.)

Beliefs:

(1) x has y if x is near y and x picks up y.

(2) I am near the cheese if the crow has the cheese and the crow sings.

(3) The crow sings if I praise the crow.

(4) The crow has the cheese.


Goal:

I have the cheese.


Reasoning steps:

1. Unify the goal with the head of (1), with x = I and y = the cheese. This produces the derived query,

I am near the cheese and I pick up the cheese.

2. Unify the first conjunct of this query with the head of (2). This produces the derived query,

The crow has the cheese and the crow sings and I pick up the cheese.

3. Unify the first conjunct of this query with (4). Since (4) is a fact (a clause with an empty body), this simply removes the conjunct and produces the derived query,

The crow sings and I pick up the cheese.

4. Unify the first conjunct of this query with the head of (3). This produces the derived query,

I praise the crow and I pick up the cheese.

This query may be understood as a "plan." The fox has figured out what he has to do to satisfy his goal.
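
This derivation can be reproduced with a small backward-chaining interpreter. The following Python sketch is only an illustration of the technique, not Kowalski's code: the tuple encoding of the clauses, the predicate names (has, near, sings, praise, pick_up), and the decision to treat irreducible but performable goals as plan steps are assumptions made here for the example.

import itertools

# Terms are tuples; a string starting with an uppercase letter is a variable.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Follow variable bindings in substitution s until an unbound term is reached.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # Return an extended substitution that makes a and b equal, or None.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def resolve(term, s):
    # Apply substitution s to a term, recursively.
    term = walk(term, s)
    if isinstance(term, tuple):
        return tuple(resolve(t, s) for t in term)
    return term

# Beliefs (1)-(4), written as (head, [body ...]).
clauses = [
    (("has", "X", "Y"), [("near", "X", "Y"), ("pick_up", "X", "Y")]),
    (("near", "fox", "cheese"), [("has", "crow", "cheese"), ("sings", "crow")]),
    (("sings", "crow"), [("praise", "fox", "crow")]),
    (("has", "crow", "cheese"), []),
]

# Goals the fox can execute directly; irreducible goals of this kind become plan steps.
ACTIONS = {"praise", "pick_up"}

counter = itertools.count()

def rename(clause):
    # Standardize apart: give the clause's variables a fresh suffix before each use.
    n = next(counter)
    def r(t):
        if is_var(t):
            return f"{t}_{n}"
        if isinstance(t, tuple):
            return tuple(r(x) for x in t)
        return t
    head, body = clause
    return r(head), [r(b) for b in body]

def solve(goals, s, plan):
    # Backward-chain through the goals; yield each plan that reduces them all.
    if not goals:
        yield [resolve(g, s) for g in plan]
        return
    goal, rest = goals[0], goals[1:]
    for clause in clauses:
        head, body = rename(clause)
        s2 = unify(goal, head, s)
        if s2 is not None:
            yield from solve(body + rest, s2, plan)
    if resolve(goal, s)[0] in ACTIONS:   # no belief reduces it, but the fox can do it
        yield from solve(rest, s, plan + [goal])

print(next(solve([("has", "fox", "cheese")], {}, [])))
# -> [('praise', 'fox', 'crow'), ('pick_up', 'fox', 'cheese')]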



The Production System Model

The logic programming/agent model evolved from the production system model.

A production system is a collection of condition-action rules that are embedded in a cycle. The cycle repeats as follows. It reads an input fact, uses forward chaining (from conditions to actions) to match the fact with one of the conditions of a production rule, verifies the remaining conditions of the rule, and then derives the actions of the rule as candidates to be executed.

A rational agent must have beliefs correlated with states of its environment. It must have dispositions to like or dislike its situation, and it must have cognitive mechanisms for acting on its immediate environment so that it acquires new beliefs which, interacting with those dispositions, leave it liking its situation better.

The cycle in a production system provides a primitive model of this cognitive behavior.

Kowalski builds the steps into the name "observation-thought-decision-action cycle." In intelligent behavior, an agent observes, thinks, decides, and acts.


Consider Kowalski's Adam and Eve example. Let the initial working memory consist of the following:

Eve mother of Cain
Eve mother of Abel
Adam father of Cain
Adam father of Abel
Cain father of Enoch
Enoch father of Irad

Suppose the production rules are:

If X mother of Y, then add X ancestor of Y.
If X father of Y, then add X ancestor of Y.
If X ancestor of Y, and Y ancestor of Z, then add X ancestor of Z.

If the rules are applied to the memory until no new facts can be added, then all the "consequences" of the initial facts will have been added. This behavior in production systems may be understood as a forerunner of forward reasoning from conditionals: read without the "add" wrapper around their conclusions, the condition-action rules are indistinguishable from conditionals used in forward reasoning.
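
This fixed-point behavior can be sketched in a few lines of Python. The encoding below (facts as tuples, the three rules hand-coded in a step function) is my own illustration of forward chaining to a fixed point, not Kowalski's implementation.

# The initial working memory, with facts encoded as ("relation", X, Y) tuples.
facts = {
    ("mother", "Eve", "Cain"),
    ("mother", "Eve", "Abel"),
    ("father", "Adam", "Cain"),
    ("father", "Adam", "Abel"),
    ("father", "Cain", "Enoch"),
    ("father", "Enoch", "Irad"),
}

def step(facts):
    # Apply the three production rules once and return the new facts to add.
    new = set()
    for rel, x, y in facts:
        if rel in ("mother", "father"):            # rules 1 and 2
            new.add(("ancestor", x, y))
    for rel1, x, y in facts:                       # rule 3 (transitivity)
        for rel2, y2, z in facts:
            if rel1 == rel2 == "ancestor" and y == y2:
                new.add(("ancestor", x, z))
    return new - facts

# Repeat the cycle until no new facts can be added (a fixed point).
while True:
    added = step(facts)
    if not added:
        break
    facts |= added

print(sorted(f for f in facts if f[0] == "ancestor"))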


Production systems approximate practical reasoning with goals.

The main obstacle is that working memory must now contain both facts (observations or consequences of observations) and "goal facts."

To accommodate these two kinds of "facts," working memory now employs a different structure for ordinary facts and for goal facts. Goals and subgoals are stored in a stack. Goal-reduction is implemented in production systems, not by backward chaining as in logic programming, but by forward chaining with rules of the form

If goal G and conditions C, then add H as a subgoal

Only the top goal can trigger a production rule. When a goal is reduced to a subgoal, the new subgoal is pushed onto the stack. When a goal is "solved," it is popped off the stack.


Here are some example production rules, stated in terms of a goal stack, for the story of the fox and the crow (a small sketch of the goal-stack machinery follows them):

If your goal (at the top of the stack) is to have an object,
and you are not near the object,
then add (by pushing on to the stack) the goal to be near the object.

If your goal (at the top of the stack) is to be near an object,
and you are near the object, then delete the goal (by popping it off the stack).

If your goal (at the top of the stack) is to have an object,
and you are near the object,
then pick up the object and delete the goal (by popping it off the stack).
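
The following Python sketch shows how these three rules might operate on a goal stack and a working memory. The encodings, and the assumption that an observation is what adds "near the cheese" to working memory, are illustrative choices made here, not part of the original example.

# Working memory of ordinary facts, and the goal stack (last element = top).
facts = set()
goals = [("have", "cheese")]

def run_rules(facts, goals):
    # Fire production rules until none applies; only the top goal can trigger one.
    while goals:
        goal = goals[-1]
        if goal[0] == "have" and ("near", goal[1]) not in facts:
            goals.append(("near", goal[1]))            # rule 1: push a subgoal
        elif goal[0] == "near" and ("near", goal[1]) in facts:
            goals.pop()                                # rule 2: subgoal solved, pop it
        elif goal[0] == "have" and ("near", goal[1]) in facts:
            print("action: pick up the", goal[1])      # rule 3: act and pop the goal
            facts.add(("have", goal[1]))
            goals.pop()
        else:
            break                                      # no rule applies; wait for input

run_rules(facts, goals)            # pushes ("near", "cheese"), then waits
facts.add(("near", "cheese"))      # suppose an observation adds this fact
run_rules(facts, goals)            # pops the subgoal, then picks up the cheese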



Achievement Goals and Maintenance Goals

As part of a theory of artificial intelligence, logic programming incorporates the primary insight of production systems: the observation-thought-decision-action cycle. Instead of condition-action rules, the logic programming model makes use of logical conditionals. In addition, to handle goals as opposed to queries, it incorporates achievement goals and maintenance goals.

Maintenance goals maintain a relationship between the agent and the changing state of the world. A maintenance goal takes the form of a conditional

if condition, then achievement goal

When the agent has a belief that matches the condition in the antecedent of the maintenance goal, the achievement goal is triggered.

Maintenance goals encode relationships with the world that an agent is designed or has evolved to maintain through its behavior. If the relationship fails, the maintenance goal issues in an achievement goal. The achievement goal triggers behavior to achieve this goal. This behavior, in turn, reinstates the relationship with the world.

The conditions that really matter to the life of the agent are encoded in the antecedents of maintenance goals. Consider hunger in animals. When animals are hungry, they tend to move to find food and eat it. In terms of the logic programming/agent model, the conditional

"if I am hungry, I find food and eat it"

is instantiated in the animal so that it functions as a maintenance goal. When the animal registers the truth of the antecedent, the content of the consequent is activated as an achievement goal. This, in turn, moves the animal to find food and eat it. In this way, maintenance goals function as a kind of a filter on incoming information since not everything in the world matters to the life of the agent.
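
A maintenance goal can therefore be pictured as a stored pair of a condition and an achievement goal, checked against the agent's current beliefs. The tiny Python sketch below is an illustrative assumption about how such a check might look; the sentences are kept as plain strings for simplicity.

# Each maintenance goal is a stored pair: (condition, achievement goal).
maintenance_goals = [
    ("I am hungry", "I have food and I eat food"),
]

def triggered(beliefs):
    # Return the achievement goals whose conditions match the current beliefs.
    return [goal for condition, goal in maintenance_goals if condition in beliefs]

print(triggered({"I am hungry"}))       # ['I have food and I eat food']
print(triggered({"the sky is blue"}))   # [] -- observations that match no condition are ignored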

In the logic programming/agent model, epistemic cognition is interest driven. Maintenance goals encode the initial interests. The agent needs to know whether the conditions have been met to trigger a maintenance goal and thus to introduce a goal that the agent needs to achieve. Observation (which is epistemic cognition) is the basic way to determine whether to trigger a maintenance goal. Once the agent has beliefs that trigger a maintenance goal, and thus has an achievement goal, practical cognition returns to epistemic cognition for a plan of action to achieve the goal.

Achievement goals are objects of desire. Desire is for something: namely, the goal. To understand what a maintenance goal is, it is helpful to think about desire in terms of the (ancient Platonic) model of desire as depletion and replenishment. The object of the desire is what replenishes. (In the example of the fox and the crow, food is what replenishes the fox.) The desire arises because the agent is depleted in a certain way. (In the example of the fox and the crow, the fox is hungry.) The maintenance goal links the depletion and the replenishment.

Now return to the example of the fox and the crow to see in more detail how the logic programming/agent model of intelligence works. The fox's goal of having the cheese ("I have the cheese") is a goal of achieving (bringing about or causing) some future state of the world in order to maintain a certain existence, namely, not being hungry.

Suppose the fox has the following maintenance goal:

If I am hungry, then I have food and I eat food.

Suppose the fox has the following beliefs:

I have X if I am near X and I pick up X.
I am near the cheese if the crow has the cheese and the crow sings.
The crow sings if I praise the crow.
Cheese is food.


In the first iteration of the cycle, the fox has the observation

I am hungry

Now, by forward reasoning, the fox derives from the maintenance goal the following achievement goal

I have food and I eat food

No candidate actions are derived, so there is no decision and no action.

In this first cycle, "observation" covers a process about which much more needs to be said. Here, however, we simply assume that observations are a way of forming beliefs and that it is possible to observe pretty much any positive feature of the world. In addition, we have not said anything about what causes the switch from observation to checking to see if maintenance goals are triggered. Further, we have not said how the agent acquires maintenance goals. These problems need to be solved in a complete model of intelligence.


In the second iteration, the cognition is not forward chaining but backward chaining, which is the one thing the model does well. The fox is trying to figure out how to achieve the newly introduced goal. Backward reasoning introduces the new subgoals

I am near food and I pick up food and I eat food

At this point, once again no candidate actions are derived, so there is no decision and no action.


In the third iteration, suppose that the fox turns once again to making observations. Here, again, the model needs to be supplemented. As it stands, there is no mechanism to control the switch from backward chaining to observation. Suppose that the fox observes that

The crow has the cheese

Now the fox can either reason forward from the observation or reason backward from the subgoals. What is it rational for him to do? Reasoning forward from observations would seem to have higher priority, since there might be an emergency or an opportunity that should not be missed. (The model, as it stands, has no mechanism for this.) So the fox can reason forward to the new belief

I am near the cheese if the crow sings


In the fourth iteration, the fox can use his belief that "Cheese is food" to transform the subgoal "I am near food" to the subgoal "I am near cheese." (This also transforms the other subgoals to "I pick up cheese" and "I eat cheese.") Now the fox can use backward chaining to match the subgoal "I am near cheese" to the head of the belief "I am near the cheese if the crow sings" to introduce the new subgoal "The crow sings." So the fox has thought his way to the following subgoals:

The crow sings and I pick up the cheese and I eat the cheese


In the fifth iteration, the fox uses backward reasoning to reduce "The crow sings" to "I praise the crow."

Now the fox has a "plan of action":

I praise the crow and I pick up the cheese and I eat the cheese

The fox forms the "intention" to execute the plan, beginning with the first action. The model, again, does not specify the mechanism here.


In the sixth iteration, the fox determines whether the action has been executed successfully. The model does not specify the mechanism, but the idea is as follows. The fox observes whether it is praising the crow. Suppose the fox sees itself praising the crow. Now, by forward reasoning, the plan of action becomes

I pick up the cheese and I eat the cheese

To continue with plan execution, the fox takes the next action from the plan.
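
The six iterations can be pulled together into a single observation-thought-decision-action loop. The Python sketch below is a simplified, propositional rendering of the cycle just described, not Kowalski's implementation: the incoming observations are scripted, the control choices that the text leaves open (when to observe, when to switch between forward and backward reasoning) are fixed by simple priorities, and one backward-chaining step is taken per cycle, so the bookkeeping differs in detail from the narrative above. An executed action is assumed to show up later as an observed fact, which is how plan execution is confirmed.

# The fox's maintenance goal, beliefs (as head -> body reductions), and actions.
maintenance_goals = [("I am hungry", ["I have food", "I eat food"])]

clauses = {
    "I have the cheese":    ["I am near the cheese", "I pick up the cheese"],
    "I am near the cheese": ["the crow has the cheese", "the crow sings"],
    "the crow sings":       ["I praise the crow"],
    "I have food":          ["I have the cheese"],   # stands in for "cheese is food"
    "I eat food":           ["I eat the cheese"],    # stands in for "cheese is food"
}
actions = {"I praise the crow", "I pick up the cheese", "I eat the cheese"}

# What observation delivers on each iteration; an executed action is assumed
# to be observed as a fact a couple of cycles later.
incoming = [{"I am hungry"}, set(), {"the crow has the cheese"},
            set(), set(), {"I praise the crow"}]

beliefs, goals = set(), []

for i, observed in enumerate(incoming, start=1):
    beliefs |= observed                                 # 1. observe

    for condition, achievement in maintenance_goals:    # 2. forward: trigger the
        if condition in beliefs and not goals:          #    maintenance goal
            goals = list(achievement)

    for j, g in enumerate(goals):                       # 3. backward: reduce the first
        if g in clauses:                                #    reducible subgoal
            goals[j:j + 1] = clauses[g]
            break
    goals = [g for g in goals if g not in beliefs]      #    drop goals already achieved

    if goals and goals[0] in actions:                   # 4. decide and act on the first
        print(f"cycle {i}: do '{goals[0]}'")            #    subgoal if it is an action

print("remaining goals:", goals)
# cycles 4-5: do 'I praise the crow'; cycle 6: the praising is observed, that goal is
# dropped, and the next action 'I pick up the cheese' is taken, leaving the goals
# 'I pick up the cheese' and 'I eat the cheese', as in the sixth iteration above.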



The Functional Difference between Cognitive States

One important point to notice in all this is the functional difference between maintenance goals, achievement goals, and the beliefs in the knowledge base (KB). Beliefs trigger maintenance goals. When maintenance goals are triggered, they issue in achievement goals. Beliefs function in the construction of plans to bring about achievement goals. These functional differences are not reflected in the logical form. An achievement goal describes a possible state of the world. So does a belief. Their logical form is the same. The difference is in how they function in the model.
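
In the terms of the sketches above, the difference is a difference in where a sentence is stored and how the cycle uses it, not in the sentence itself. A minimal illustration (the variable names are my own):

# One and the same sentence, with one and the same logical form:
sentence = "I have the cheese"

goals = [sentence]       # stored as an achievement goal: something to be made true
beliefs = set()          # the belief store: sentences taken to be true already

# Once the fox brings the state about and observes it, the sentence changes
# role: it becomes a belief and stops being an outstanding goal.
beliefs.add(sentence)
goals.remove(sentence)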






