The Fox and the Crow
Introduction to the logic programming/agent model
Computational Logic and Human Thinking, chapters 3 and 8
The Fox makes a Plan to get the Cheese
The example of the fox and the crow illustrates the logic programming/agent model of the intelligence that characterizes a rational agent. The example is primitive, but it shows some of the promise of the logic programming/agent model. In the example, the fox has "beliefs" and a "goal." He uses his beliefs to make a "plan" to achieve the goal.
Goals are new to the model. They take two forms: maintenance goals and achievement goals. Maintenance goals are the more fundamental of the two. They are incorporated into the model in such a way that against the background of certain beliefs they issue in achievement goals. Achievement goals in turn function as queries to the KB.
The Fox's Beliefs:
(1) x has y if x is near y and x picks up y.
(2) I am near the cheese if the crow has the cheese and the crow sings.
(3) The crow sings if I praise the crow.
(4) The crow has the cheese.
The Fox's Achievement Goal: I have the cheese.
Outline of the Reasoning Steps to get the Cheese:
1. Unify the goal and the head of (1). This produces the derived query,
I am near the cheese and I pick up the cheese.
2. Unify the first conjunct of this query with the head of (2). This produces the derived query,
The crow has the cheese and the crow sings and I pick up the cheese.
3. Unify the first conjunct of this query with the head of (4). This produces the derived query,
The crow sings and I pick up the cheese.
4. Unify the first conjunct of this query with the head of (3). This produces the derived query,
I praise the crow and I pick up the cheese.
This query may be understood as a "plan." The fox has figured out what he has to do to satisfy his goal.
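To make the goal-reduction steps concrete, here is a minimal Python sketch of the derivation. It treats each statement as an unanalyzed string, so there is no real unification over the variables x and y; belief (1) is written out with "the cheese" already substituted. The data structures are illustrative assumptions, not part of the model itself.

```python
# The fox's beliefs as (head, body) clauses. A fact has an empty body.
# Statements are plain strings, so variable binding is assumed done.
KB = [
    ("I have the cheese", ["I am near the cheese", "I pick up the cheese"]),
    ("I am near the cheese", ["the crow has the cheese", "the crow sings"]),
    ("the crow sings", ["I praise the crow"]),
    ("the crow has the cheese", []),
]

def reduce_goals(goals):
    """Backward chaining: replace the first goal that matches the head
    of a clause with that clause's body. Goals matching no head are
    irreducible and survive as the actions of the plan."""
    plan = []
    while goals:
        goal, *rest = goals
        clause = next(((h, b) for h, b in KB if h == goal), None)
        if clause is None:
            plan.append(goal)            # an action the fox must perform
            goals = rest
        else:
            goals = clause[1] + rest     # derived query: body + remainder
    return plan

print(reduce_goals(["I have the cheese"]))
# -> ['I praise the crow', 'I pick up the cheese']
```

The successive values of the goal list correspond, step by step, to the derived queries in steps 1 through 4 above.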
The Production System Model
The logic programming/agent model evolved from the production system model.
A production system is a collection of condition-action rules that are embedded in a cycle. The cycle repeats as follows. It reads an input fact, uses forward chaining (from conditions to actions) to match the fact with one of the conditions of a production rule, verifies the remaining conditions of the rule, and then derives the actions of the rule as candidates to be executed.
A rational agent must have beliefs correlated with states of its environment. It must have dispositions to like or dislike its situation, and it must have cognitive mechanisms to engage in activity that has a tendency to change its immediate environment, so that it acquires new beliefs that interact with its dispositions in such a way that it likes its situation better.
The cycle in a production system provides a primitive model of this cognitive behavior.
The Adam and Eve example
Consider Kowalski's Adam and Eve example. Let the initial working memory consist of the following:
Eve mother of Cain
Eve mother of Abel
Adam father of Cain
Adam father of Abel
Cain father of Enoch
Enoch father of Irad
Suppose the production rules are:
If X mother of Y, then add X ancestor of Y.
If X father of Y, then add X ancestor of Y.
If X ancestor of Y and Y ancestor of Z, then add X ancestor of Z.
If the rules are applied until no new facts can be added, then all the "consequences" of the initial facts will have been added. This behavior in production systems may be understood as a forerunner of forward reasoning from conditionals. The condition-action rules, without the add action, are indistinguishable from the conditionals in the logic programming/agent model.
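The run to a fixed point can be sketched in a few lines of Python. Representing the facts as (X, relation, Y) triples and encoding the rules directly as loops are assumptions made for illustration.

```python
# Working memory as a set of (X, relation, Y) triples.
facts = {
    ("Eve", "mother", "Cain"), ("Eve", "mother", "Abel"),
    ("Adam", "father", "Cain"), ("Adam", "father", "Abel"),
    ("Cain", "father", "Enoch"), ("Enoch", "father", "Irad"),
}

def fire_rules(facts):
    """One forward-chaining pass: return the facts the three rules add."""
    new = set()
    for x, rel, y in facts:
        if rel in ("mother", "father"):                # rules 1 and 2
            new.add((x, "ancestor", y))
    for x, rel, y in facts:
        for y2, rel2, z in facts:
            if rel == "ancestor" == rel2 and y == y2:  # rule 3
                new.add((x, "ancestor", z))
    return new - facts

# The cycle: apply the rules until no new facts can be added.
while new := fire_rules(facts):
    facts |= new

print(sorted(f for f in facts if f[1] == "ancestor"))
```

After three passes the loop stabilizes, having added every ancestor fact, including the transitive ones such as Eve ancestor of Irad.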
The goal stack in production systems
Production systems approximate practical reasoning by means of a goal stack. The main obstacle is that working memory must now contain both "facts" (observations or consequences of observations) and "goal facts." To accommodate these two kinds of "facts," working memory employs a different structure for ordinary facts and for goal facts. Goals and subgoals are stored in a stack. Goal-reduction is implemented in production systems, not by backward chaining as in logic programming, but by forward chaining with rules of the form
If goal G and conditions C, then add H as a subgoal
Only the top goal can trigger a production rule. When a goal is reduced to a subgoal, the new subgoal is pushed onto the stack. When a goal is "solved," it is popped off the stack.
Here are some example production rules, stated in terms of a goal stack, for the story of the fox and the crow:
If your goal (at the top of the stack) is to have an object,
and you are not near the object,
then add (by pushing onto the stack) the goal to be near the object.
If your goal (at the top of the stack) is to be near an object,
and you are near the object,
then delete the goal (by popping it off the stack).
If your goal (at the top of the stack) is to have an object,
and you are near the object,
then pick up the object and delete the goal (by popping it off the stack).
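The following Python sketch runs these three rules over a goal stack. The world state is an invention for illustration, and a fourth rule that moves the agent toward the object (not among the rules above) is assumed so that the run terminates.

```python
world = {"near": False, "have": False}     # invented world state
stack = ["have object"]                    # the goal stack; top is last

while stack:
    goal = stack[-1]                       # only the top goal can fire
    if goal == "have object" and not world["near"]:
        stack.append("be near object")     # rule 1: push the subgoal
    elif goal == "be near object" and world["near"]:
        stack.pop()                        # rule 2: subgoal solved, pop it
    elif goal == "have object" and world["near"]:
        print("action: pick up the object")
        world["have"] = True
        stack.pop()                        # rule 3: goal solved, pop it
    elif goal == "be near object":
        print("action: move toward the object")  # extra rule, assumed
        world["near"] = True
```

The run pushes "be near object," moves, pops it once the agent is near, and then picks up the object and pops the top-level goal, leaving the stack empty.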
Achievement Goals and Maintenance Goals
As part of a theory of artificial intelligence, logic programming incorporates the primary insight of production systems: the observation-thought-decision-action cycle. Instead of condition-action rules, the logic programming model makes use of conditionals. In addition, to handle goals as opposed to queries, it incorporates achievement goals and maintenance goals.
Maintenance goals serve to maintain a relationship with the changing state of the world. A maintenance goal takes the form of a conditional:
if condition, then achievement goal
When the agent has a belief that matches the condition in the antecedent of the maintenance goal, the achievement goal is triggered.
Maintenance goals encode relationships with the world that an agent is designed or has evolved to maintain through its behavior. If the relationship fails, the maintenance goal issues in an achievement goal. The achievement goal triggers behavior to achieve this goal. This behavior, in turn, reinstates the relationship with the world.
The states in the world that really matter to the life of the agent are encoded in the antecedents of maintenance goals. Consider hunger in animals. When animals are hungry, they tend to move to find food and eat it. In terms of the logic programming/agent model of rational intelligence, the conditional
"if I am hungry, I find food and eat it"
is instantiated in the animal so that it functions as a maintenance goal. When the animal registers the truth of the antecedent, the content of the consequent is activated as an achievement goal. This, in turn, moves the animal to find food and eat it. In this way, maintenance goals serve to focus the information the agent seeks, since not everything in the world matters to the life of the agent. The agent does not just form beliefs about the world. It needs to know whether the conditions have been met to trigger a maintenance goal and thus to introduce a goal that the agent needs to achieve.
To understand a little more clearly what a maintenance goal is, it is helpful, at least initially, to think of desire in terms of the ancient Platonic model of depletion and replenishment. The object of the desire is what replenishes and thus maintains the agent. (In the example of the fox and the crow, finding and eating food replenishes the fox.) The desire (for food, in the fox and crow example) arises because the agent is depleted in a certain way (the fox is hungry). The maintenance goal links the depletion to the condition that replenishes it.
Maintenance goals, achievement goals, and the beliefs in the KB function differently in the logic programming/agent model. Beliefs trigger maintenance goals. When maintenance goals are triggered, they issue in achievement goals. These achievement goals function as queries. These queries initiate the construction of plans to bring about achievement goals.
These functional differences between maintenance goals, achievement goals, and the beliefs in the KB are not reflected in their logical form within the logic programming/agent model. An achievement goal, e.g., describes a possible state of the world. So does a belief. Their logical form is the same. The difference between them is in how they function in the model.
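To illustrate, here is a minimal Python sketch of the triggering step, with invented predicate strings. Notice that the belief and the achievement goal are both plain strings describing states of the world; they differ only in which collection they sit in and how they are used, which is exactly the functional (rather than logical) difference just described.

```python
# Each maintenance goal pairs a condition with an achievement goal.
maintenance_goals = [("I am hungry", "I have food and I eat food")]

beliefs = {"I am hungry"}        # e.g. acquired by observation
queries = []                     # achievement goals awaiting planning

# Forward step: a belief matching a condition triggers the goal.
for condition, achievement in maintenance_goals:
    if condition in beliefs:
        queries.append(achievement)

print(queries)   # ['I have food and I eat food'] -- input to backward chaining
```

The resulting query would then be handed to backward chaining for plan construction.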
What the Fox is Thinking
Against this background, we can return to the example of the fox and the crow to see in
more detail how the logic programming/agent model of intelligence works.
The fox's goal of having the cheese ("I have the cheese") is a goal of achieving (bringing about or causing) some future state of the world in order to maintain a certain condition of his existence, namely, not being hungry.
Suppose the fox has the following maintenance goal:
If I am hungry, then I have food and I eat food.
Suppose the fox has the following beliefs:
I have X if I am near X and I pick up X.
I am near the cheese if the crow has the cheese and the crow sings.
The crow sings if I praise the crow.
Cheese is food.
In the first iteration of the cycle, the fox has the observation
I am hungry
Now, by forward reasoning, the fox derives from the maintenance goal the following achievement goal
I have food and I eat food
No candidate actions are derived, so there is no decision and no action.
In this first cycle, "observation" covers a process about which much more needs to be said. Here, however, we simply assume that observations are a way of forming beliefs and that it is possible to observe pretty much any positive feature of the world. In addition, we have not said anything about what causes the switch from observation to checking to see if maintenance goals are triggered. Further, we have not said how the agent acquires maintenance goals. These problems need to be solved in a complete model of intelligence.
In the second iteration, the cognition is not forward chaining. It is backward chaining. This is the one thing the model does well. The fox is trying to figure out how to achieve the newly introduced goal. He uses backward reasoning, which introduces new subgoals
I am near food and I pick up food and I eat food
At this point, once again no candidate actions are derived, so there is no decision and no action.
In the third iteration, suppose that the fox turns once again to making observations. Here, again, the model needs to be supplemented. As it stands, there is no mechanism to control the switch from backward chaining to observation. Suppose that the fox observes that
The crow has the cheese
Now the fox can either reason forward from the observation or reason backward from the subgoals. What is it rational for him to do? Reasoning forward from observations would seem to have higher priority, since there might be an emergency or an opportunity that should not be missed. (The model, as it stands, has no mechanism for this.) So the fox can reason to the new belief
I am near the cheese if the crow sings
In the fourth iteration, the fox can use his belief that "Cheese is food" to transform the subgoal "I am near food" to the subgoal "I am near cheese." (This also transforms the other subgoals to "I pick up cheese" and "I eat cheese.") Now the fox can use backward chaining to match the subgoal "I am near cheese" to the head of the belief "I am near the cheese if the crow sings" to introduce the new subgoal "The crow sings." So the fox has thought his way to the following subgoals:
The crow sings and I pick up the cheese and I eat the cheese
In the fifth iteration, the fox uses backward reasoning to reduce "The crow sings" to "I praise the crow."
Now the fox has a "plan of action":
I praise the crow and I pick up the cheese and I eat the cheese
The fox forms the "intention" to execute the plan, beginning with the first action. The model, again, does not specify the mechanism here.
In the sixth iteration, the fox determines whether the action is executed successfully. The model does not specify the mechanism, but the idea is as follows. The fox observes whether it is praising the crow. Suppose the fox sees itself praising the crow. Now by forward reasoning the plan of action becomes
I pick up the cheese and I eat the cheese
To continue with plan execution, the fox takes the next action from the plan.
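Putting the iterations together, here is a rough Python sketch of the whole cycle. To keep it short, it skips the fourth iteration's "Cheese is food" transformation and states the goals in terms of cheese from the start; and the scheduling of observation against reasoning is hard-coded, since, as noted, the model itself does not fix that mechanism.

```python
KB = [
    ("I have the cheese", ["I am near the cheese", "I pick up the cheese"]),
    ("I am near the cheese", ["the crow has the cheese", "the crow sings"]),
    ("the crow sings", ["I praise the crow"]),
]
maintenance_goal = ("I am hungry", ["I have the cheese", "I eat the cheese"])

beliefs = set()
# Iteration 1: observe, and let the belief trigger the maintenance goal.
beliefs.add("I am hungry")
goals = maintenance_goal[1] if maintenance_goal[0] in beliefs else []

# Iteration 3: a further observation enters the beliefs.
beliefs.add("the crow has the cheese")

def reduce_goals(goals):
    """Iterations 2-5: backward chaining. Beliefs discharge subgoals;
    irreducible subgoals become the actions of the plan."""
    plan = []
    while goals:
        g, *rest = goals
        if g in beliefs:
            goals = rest                               # already true
            continue
        clause = next(((h, b) for h, b in KB if h == g), None)
        goals = (clause[1] + rest) if clause else rest
        if clause is None:
            plan.append(g)                             # an action
    return plan

plan = reduce_goals(goals)
print(plan)  # ['I praise the crow', 'I pick up the cheese', 'I eat the cheese']

# Iteration 6: execute the plan, observing each action's success.
while plan:
    action = plan.pop(0)
    print("do:", action)
    beliefs.add(action)   # assume the fox observes that it succeeded
```

The printed plan matches the one derived in the fifth iteration, and the execution loop mirrors the way the fox takes the next action from the plan once the previous action is observed to succeed.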
What we have accomplished
At this point, our model of the intelligence of a rational agent is more complete. It is still too simple to be realistic. It also is incomplete and suffers from certain problems, but it is a start. Rational agents have beliefs about the world, and they reason in terms of these beliefs to decide what to do. The KB is a symbolic structure that represents the agent's beliefs. When these beliefs show the agent that the world is not to its liking, they trigger a maintenance goal. The maintenance goal issues in an achievement goal. The achievement goal functions as a query. This query initiates plan construction.