PROHIBITIONS AND PROSPECTIVE LOGIC PROGRAMMING
Computational Logic and Human Thinking, 12
A maintenance goal provides the agent with achievement goals. A prohibition rules out certain plans to achieve these goals. In this way, prohibitions constrain what an agent is permitted to do.
The Question of What is Right and Wrong
In machine ethics, the problem once again is not first and foremost a straightforward problem in engineering. The problem is philosophical. It is to know what is right and wrong, and thus to know what the correct prohibitions are.
Consider the following variation on the runaway trolley problem.
A runaway trolley is about to run over and kill five people. You are a bystander standing on a footbridge over the track. The only way to stop the train and save the five people is to throw a heavy object in front of the train. The only heavy object available is a large man standing next to you. Should you do it? Should you sacrifice one life to save five lives?
In the context of the logic programming/agent model, this question reduces to a question about which plan to pursue. The achievement goal is to respond to the danger. There are two ways to respond to it. The agent can do nothing or push the large man in front of the oncoming train. Which is permitted?
The terms of moral appraisal (right, wrong, obligatory) are interdefinable. It is possible to take one as given and to define the remaining two. Here is how this works if the term right is given. In this case, wrong is defined as what is not right (an act is wrong iff it is not right), and obligatory is defined as what it is not right not to do (an act is obligatory iff it is not right not to do it).
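The interdefinability can be made concrete in a short sketch. The act names, the pairing of each act with its omission, and the choice of which acts count as right are all illustrative assumptions, not part of the text:

```python
# Sketch: take "right" as given over a small set of acts, and define
# "wrong" and "obligatory" from it. Acts and their omissions are
# illustrative; only "save" is assumed right here.

OMISSION = {"save": "omit_save", "omit_save": "save"}
RIGHT = {"save"}  # assumed extension of "right"

def right(act):
    return act in RIGHT

def wrong(act):
    # an act is wrong iff it is not right
    return not right(act)

def obligatory(act):
    # an act is obligatory iff it is not right not to do it
    return not right(OMISSION[act])

print(wrong("omit_save"))    # True: omitting to save is not right
print(obligatory("save"))    # True: it is not right not to save
```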
This interdefinability of the terms of appraisal can make it tempting to try to give an analysis of right without referring to the other terms of moral appraisal.
The theory in ethics known as utilitarianism may be understood as an attempt to give such an analysis of right. Utilitarianism makes what is and is not permitted depend on the consequences of what the agent can do. In its simplest form, utilitarianism defines right in terms of utility: an act is right if, and only if, no alternative has a higher utility. Hedonistic utilitarianism defines utility in terms of pleasure. So the idea is that an act is right, or permitted, just in case nothing else the agent can do would bring about more pleasure in the world.
In the case of the runaway trolley, utilitarianism seems to give the result that the agent is prohibited from doing anything other than push the man in front of the train. Saving the five lives appears to be what brings about the most pleasure in the world. The other option, doing nothing, is prohibited because it does not bring about as much pleasure.
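The simple utilitarian criterion above can be sketched in a few lines. The numeric utilities, counting lives saved as a crude stand-in for pleasure produced, are illustrative assumptions:

```python
# Simple utilitarianism, sketched: an act is right (permitted) iff
# no alternative act has a higher utility.

def right(act, utility):
    return all(utility[act] >= u for u in utility.values())

# Assumed utilities for the trolley variation (lives saved as a
# crude proxy for pleasure brought about).
utility = {
    "push the man in front of the train": 5,
    "do nothing": 1,
}

for act in utility:
    verdict = "permitted" if right(act, utility) else "prohibited"
    print(f"{act}: {verdict}")
```

On these assumed numbers, only pushing the man comes out permitted, which is the paradoxical result discussed next.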
This result is paradoxical. Many people would say that it is wrong to push the man in front of the train.
This might be taken just to show that utilitarianism is false. Still, it highlights the problem: machine ethics needs a theory of what is right and wrong, and it is not clear what that theory is.
Prohibitions in the Logic Programming/Agent Model of Intelligence
Prohibitions are a little like maintenance goals. Beliefs can trigger maintenance goals. These beliefs may be produced directly by observation or by forward chaining on the basis of observation. In the example below, the items in the plan of action function like observations. The agent chains forward from the items in the plan to consequences to determine whether any of these consequences trigger a prohibition. If they do, the agent abandons the plan because he is prohibited from executing this plan to satisfy the achievement goal.
To construct an example, the agent must have general beliefs, beliefs about the current situation, a maintenance goal, the ability to engage in backward and forward chaining, and a prohibition.
The Agent has General Beliefs about the World
a person is killed
if the person is in danger of being killed by a train
and no one saves the person from being killed by the train.
a person X kills a person Y if X throws Y in front of a train.
a person is in danger of being killed by a train
if the person is on a railtrack
and a train is speeding along the railtrack
and the person is unable to escape from the railtrack.
a person saves a person from being killed by a train
if the person stops the train.
a person stops a train
if the person places a heavy object in front of the train.
a person places a heavy object in front of the train
if the heavy object is next to the person
and the train is on a railtrack
and the person is within throwing distance of the object to the railtrack
and the person throws the object in front of the train.
The Agent has Beliefs about the Current Situation
five people are on the railtrack.
a train is speeding along the railtrack.
the five people are unable to escape from the railtrack.
john is next to me.
john is an innocent bystander.
john is a heavy object.
I am within throwing distance of john to the railtrack.
The Agent has a Maintenance Goal
if a person is in danger of being killed by a train
then I respond to the danger of the person being killed by the train.
Two Beliefs support the Maintenance Goal
I respond to the danger of a person being killed by the train
if I ignore the danger.
I respond to the danger of a person being killed by the train
if I save the person from being killed by the train.
The Agent has a Prohibition
if I kill a person and the person is an innocent bystander, then false.
A Sketch of the Reasoning that leads to Action
Making certain assumptions for simplicity, forward chaining yields the belief that
five people are in danger of being killed by the train
This belief triggers the maintenance goal to introduce the achievement goal
I respond to the danger of the five people being killed by the train
Backward chaining provides two alternative subgoals
I ignore the danger
I save the five people from being killed by the train.
Thinking about the second subgoal produces the plan of action
I throw john onto the railtrack in front of the train
The question now is whether that is a good plan. To determine the answer, the agent chains forward (or prospectively) to consequences. This is where the prohibition comes into play. When the agent chains forward (or prospectively) to consequences of the plan, the prohibition rules the plan out because the plan has the unacceptable consequence (represented as false). If the agent were to execute the plan, he would kill an innocent bystander. That would be wrong. So he does not do it. Instead, in the example, he ignores the danger.
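The reasoning just sketched can be approximated propositionally in a short program. This is a minimal sketch, assuming simplified propositional spellings of the beliefs, a fixed list of candidate plans (standing in for the backward chaining that produces them), and the prohibition represented as a rule whose head is false:

```python
# Propositional sketch of the agent's reasoning. Rules are (head, body)
# pairs: the head holds if every condition in the body holds. The
# prohibition is the rule whose head is "false".

BELIEFS = [
    ("five people are in danger of being killed by the train",
     ["five people are on the railtrack",
      "a train is speeding along the railtrack",
      "the five people are unable to escape from the railtrack"]),
    ("i kill john", ["i throw john in front of the train"]),
    ("false", ["i kill john", "john is an innocent bystander"]),  # prohibition
]

FACTS = {
    "five people are on the railtrack",
    "a train is speeding along the railtrack",
    "the five people are unable to escape from the railtrack",
    "john is an innocent bystander",
}

# The two ways of responding to the danger, each already reduced (by
# backward chaining, elided here) to a candidate plan of action.
CANDIDATE_PLANS = [
    ["i throw john in front of the train"],  # save the five by stopping the train
    ["i ignore the danger"],
]

def forward_chain(facts, rules):
    """Repeatedly add rule heads whose body conditions all hold."""
    derived = set(facts)
    while True:
        new = {head for head, body in rules
               if head not in derived and all(c in derived for c in body)}
        if not new:
            return derived
        derived |= new

def choose_plan():
    # Treat the plan items like observations: chain forward from them
    # and reject any plan whose consequences trigger the prohibition.
    for plan in CANDIDATE_PLANS:
        if "false" not in forward_chain(FACTS | set(plan), BELIEFS):
            return plan
    return None

# The maintenance goal fires once the danger is derived by forward chaining.
danger = "five people are in danger of being killed by the train"
if danger in forward_chain(FACTS, BELIEFS):
    print(choose_plan())  # prints ['i ignore the danger']
```

The push plan is rejected because chaining forward from it derives that the agent kills john, an innocent bystander, and hence derives false; the only remaining permitted plan is to ignore the danger, matching the outcome described above.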