PROHIBITIONS AND PROSPECTIVE LOGIC PROGRAMMING

Machine Ethics


Computational Logic and Human Thinking, 12


A maintenance goal provides the agent with achievement goals. A prohibition rules out certain plans to achieve these goals. In this way, prohibitions constrain what an agent is permitted to do.


The Question of What is Right

The difficult thing is to know what is right and what is wrong. Consider the following variation on the runaway trolley problem.

A runaway trolley is about to run over and kill five people. You are a bystander standing on a footbridge over the track. The only way to stop the train and save the five people is to throw a heavy object in front of the train. The only heavy object available is a large man standing next to you. Should you do it? Should you sacrifice one life to save five lives?

In the context of the logic programming/agent model, this is a question about which plans are permitted. The achievement goal is to respond to the danger. There are two ways to respond: do nothing or push the large man in front of the train. Which is permitted?

The terms of moral appraisal (right, wrong, obligatory) are interdefinable. It is possible to take one as given and to define the remaining two. Here is how this works if the term right is given. In this case, wrong is defined as what is not right (an act is wrong iff it is not right), and obligatory is defined as what it is not right not to do (an act is obligatory iff it is not right not to do it).
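A minimal Prolog sketch of this interdefinability, using negation as failure, might look as follows. Here right/1 is taken as given, and refrain(Act) is an invented term standing for not doing Act.

:- dynamic right/1.

% an act is wrong iff it is not right
wrong(Act) :- \+ right(Act).

% an act is obligatory iff it is not right not to do it
obligatory(Act) :- \+ right(refrain(Act)).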

This interdefinability of the terms of appraisal can make it tempting to try to give an analysis of right without referring to the other terms of moral appraisal.

The theory known as utilitarianism may be understood in this way. In its simplest form, it defines right in terms of utility: an act is right iff no alternative has a higher utility. Hedonistic utilitarianism defines utility in terms of pleasure. So the idea is that an act is right, or permitted, just in case nothing else the agent can do would bring more pleasure to the world. Given this much, it is easy to see how the trolley example may be understood so that saving the five lives is the option that, in the circumstances, would bring the most pleasure to the world.
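As a rough sketch, not part of the text, the simple utilitarian definition can itself be written as a logic program. The predicates utility/2 and alternative_to/2, and the numbers, are illustrative assumptions only.

% an act is right iff no alternative to it has a higher utility
right(Act) :-
    utility(Act, U),
    \+ ( alternative_to(Act, Other), utility(Other, V), V > U ).

% assumed, purely illustrative figures for the footbridge case
utility(push_the_man, 4).          % five lives saved, one lost
utility(ignore_the_danger, -5).    % five lives lost
alternative_to(push_the_man, ignore_the_danger).
alternative_to(ignore_the_danger, push_the_man).

% ?- right(push_the_man).        succeeds
% ?- right(ignore_the_danger).   fails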

Utilitarianism makes what is and is not permitted depend on the consequences; it is the consequences that determine which actions are prohibited. In the case of the runaway trolley, utilitarianism seems to say you are prohibited from doing anything other than push the man in front of the train. The other options are all wrong.

This is paradoxical. Many people would say that it is wrong to push the man in front of the train.

This sort of controversy about what is right is a problem. It suggests that the first problem for machine ethics is a problem in philosophy, not a problem in engineering.


Prohibitions in the Logic Programming/Agent Model of Intelligence

Prohibitions are a little like maintenance goals. Beliefs can trigger maintenance goals. These beliefs may be produced directly by observation or by forward chaining on the basis of observation. In the example below, the items in the plan of action function like observations in a possible future. The agent chains forward from the items in the plan to their consequences to determine whether any of these consequences trigger a prohibition. If they do, the agent abandons the plan, because he is prohibited from executing it to satisfy the achievement goal.
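The following is a minimal Prolog sketch of this prospective check; all the predicate names are invented for illustration. A candidate plan is temporarily assumed, the conditions of each prohibition are evaluated against the agent's beliefs plus the assumed plan, and the plan is rejected if any prohibition fires.

:- dynamic prohibited_if/1.

% prohibited_if(Conditions): the body of a prohibition, as a list of conditions

plan_is_prohibited(Plan) :-
    assume(Plan),                       % add the plan's actions as hypothetical facts
    (  prohibited_if(Conditions),
       hold(Conditions)                 % forward consequences trigger a prohibition
    -> Violated = true
    ;  Violated = false
    ),
    withdraw(Plan),                     % remove the hypothetical facts again
    Violated == true.

plan_is_permitted(Plan) :-
    \+ plan_is_prohibited(Plan).

assume([]).
assume([Action|Rest])   :- assertz(Action), assume(Rest).

withdraw([]).
withdraw([Action|Rest]) :- retract(Action), withdraw(Rest).

hold([]).
hold([Condition|Rest])  :- call(Condition), hold(Rest).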

To construct an example, the agent must have general beliefs, beliefs about the current situation, a maintenance goal, the ability to engage in backward and forward chaining, and a prohibition.


The Agent has General Beliefs about the World

a person is killed if the person is in danger of being killed by a train
and no one saves the person from being killed by the train.

a person X kills a person Y if X throws Y in front of a train.

a person is in danger of being killed by a train
if the person is on a railtrack
and a train is speeding along the railtrack
and the person is unable to escape from the railtrack.

a person saves a person from being killed by a train
if the person stops the train.

a person stops a train
if the person places a heavy object in front of the train.

a person places a heavy object in front of the train
if the heavy object is next to the person
and the train is on a railtrack
and the person is within throwing distance of the object to the railtrack
and the person throws the object in front of the train.
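One possible Prolog encoding of these beliefs is sketched below. The predicate names, and the decision to treat the railtrack and the train as constants, are choices made for illustration, not fixed by the text.

% actions the agent might perform; they become facts only when a plan is
% assumed (prospectively) or actually executed
:- dynamic throws_in_front_of_train/2, ignore_the_danger/0.

killed(Person) :-
    in_danger_from_train(Person),
    \+ saves(_Anyone, Person).

kills(X, Y) :-
    throws_in_front_of_train(X, Y).

in_danger_from_train(Person) :-
    on(Person, Track),
    train_speeding_along(Track),
    unable_to_escape(Person, Track).

saves(Someone, _Person) :-
    stops_the_train(Someone).

stops_the_train(Person) :-
    heavy_object(Object),
    places_in_front_of_train(Person, Object).

places_in_front_of_train(Person, Object) :-
    heavy_object(Object),
    next_to(Object, Person),
    train_speeding_along(Track),     % read here as: the track the train is on
    within_throwing_distance(Person, Object, Track),
    throws_in_front_of_train(Person, Object).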


The Agent has Beliefs about the Current Situation

five people are on the railtrack.
a train is speeding along the railtrack.
the five people are unable to escape from the railtrack.
john is next to me.
john is an innocent bystander.
john is a heavy object.
I am within throwing distance of john to the railtrack.
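The corresponding Prolog facts, with the five people represented by a single constant for simplicity:

on(five_people, railtrack).
train_speeding_along(railtrack).
unable_to_escape(five_people, railtrack).
next_to(john, me).
innocent_bystander(john).
heavy_object(john).
within_throwing_distance(me, john, railtrack).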


The Agent has a Maintenance Goal

if a person is in danger of being killed by a train
then I respond to the danger of the person being killed by the train.
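Prolog itself has no reactive rules, so one simple (assumed) representation is to store the maintenance goal as a condition/conclusion pair that the agent's cycle consults: when the condition becomes true, the conclusion is adopted as an achievement goal.

% if a person is in danger of being killed by a train,
% then I must respond to that danger
maintenance_goal(in_danger_from_train(Person),
                 respond_to_danger(Person)).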


Two Beliefs support the Maintenance Goal

I respond to the danger of a person being killed by the train
if I ignore the danger.

I respond to the danger of a person being killed by the train
if I save the person from being killed by the train.
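Using the predicate names from the encoding above, the two beliefs become two clauses for respond_to_danger/1. The actions ignore_the_danger/0 and throws_in_front_of_train/2 are the agent's own actions, so backward chaining stops at them: they are the residual subgoals that make up a plan.

respond_to_danger(_Person) :-
    ignore_the_danger.

respond_to_danger(Person) :-
    saves(me, Person).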


The Agent has a Prohibition

If I kill a person and the person is an innocent bystander, then false.
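In the prohibited_if/1 representation sketched earlier, the prohibition becomes an integrity constraint whose conditions must never all hold:

% it must never become true that I kill an innocent bystander
prohibited_if([kills(me, Person), innocent_bystander(Person)]).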



A Sketch of the Reasoning that leads to Action

Making certain assumptions for simplicity, forward chaining yields the belief that

five people are in danger of being killed by the train

This belief triggers the maintenance goal to introduce the achievement goal

I respond to the danger of the five people being killed by the train

Backward chaining provides two alternative subgoals

I ignore the danger
I save the five people from being killed by the train.

Thinking about the second subgoal produces the plan of action

I throw John onto the railtrack in front of the train

The question now is whether this is a good plan. To determine the answer, the agent chains forward (or prospectively) from the plan to its consequences. This is where the prohibition comes into play: the prohibition rules the plan out because the plan has an unacceptable consequence (represented as false). If the agent were to execute the plan, he would kill an innocent bystander. That would be wrong. So he does not do it. Instead, in the example, he ignores the danger.
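Assuming the sketches above are loaded together as one program, this reasoning can be reproduced with queries like the following (illustrative output):

% ?- in_danger_from_train(Who).
% Who = five_people.
%    forward chaining recognises the danger and triggers the maintenance goal

% ?- plan_is_prohibited([throws_in_front_of_train(me, john)]).
% true.
%    assuming the plan, kills(me, john) and innocent_bystander(john) both hold,
%    so the prohibition fires and the plan is ruled out

% ?- plan_is_permitted([ignore_the_danger]).
% true.
%    doing nothing triggers no prohibition, so it is the plan that remains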








