PROHIBITIONS AND PROSPECTIVE LOGIC PROGRAMMING
Introduction to Machine Ethics
• Computational Logic and Human Thinking. Chapter 8 (135-139), Chapter 12 (171-181)
A maintenance goal gives rise to achievement goals. A prohibition rules out certain plans for achieving these goals.
"A prohibition can be regarded as a special kind of maintenance goal whose conclusion is literally false. ... [In this way, p]rohibitions are constraints on the actions you can perform" (Computational Logic and Human Thinking, 136).
The Question of What is Right and Wrong
In machine ethics, the problem once again is not first and foremost a straightforward problem in engineering. The problem is philosophical. It is to know what is right and wrong and thus to know what the correct prohibitions are.
As an example, consider the following variation on the "runaway trolley" problem (Computational Logic and Human Thinking, 171).
A runaway trolley is about to run over and kill five people. You are a bystander standing on a footbridge over the track. The only way to stop the train and save the five people is to throw a heavy object in front of the train. The only heavy object available is a large man standing next to you. Should you do it? Should you sacrifice one life to save five lives?
In the context of the logic programming/agent model, this question reduces to a question about which plan to pursue. The achievement goal is to act in response to the danger, and there are two ways the agent can act. The agent can do nothing or push the large man in front of the oncoming train. Which is the agent permitted to do?
The terms of moral appraisal are interdefinable
The terms of moral appraisal (right, wrong, obligatory) are interdefinable. It is possible to take one as given and to define the remaining two. Here is how this works if the term right is given. In this case, wrong is defined as what is not right (an act is wrong iff it is not right) and obligatory is defined as what it is not right not to do (an act is obligatory iff it is not right not to do it).
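The interdefinability can be sketched in a few lines of Python. This is a minimal illustration, not part of the source: "right" is taken as a primitive predicate, and not doing an act A is represented (as an assumption of the sketch) by the omission `("refrain", A)`.

```python
# Interdefinability sketch: take right() as primitive and define the
# other two terms of appraisal from it. The representation of "not
# doing act A" as ("refrain", A) is an illustrative convention.

def make_appraisals(right):
    def wrong(act):
        # an act is wrong iff it is not right
        return not right(act)

    def obligatory(act):
        # an act is obligatory iff it is not right not to do it
        return not right(("refrain", act))

    return wrong, obligatory

# Illustration: saving is right; refraining from saving is not right.
def right(act):
    return act != ("refrain", "save")

wrong, obligatory = make_appraisals(right)
assert not wrong("save")               # saving is not wrong
assert wrong(("refrain", "save"))      # refraining from saving is wrong
assert obligatory("save")              # saving is obligatory
```

The sketch makes the circularity visible: `wrong` and `obligatory` carry no information beyond whatever is packed into `right`.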
(Other terms of appraisal are synonyms of the basic three. So, for example, permitted and right are synonyms. So are forbidden, prohibited, and wrong.)
This interdefinability of the terms of appraisal presents a problem. If there is a question about what is permitted in a given situation, to be told that one is permitted to do whatever is not wrong is not informative. To solve this problem, it is tempting to look for an analysis of one of the terms that is informative and thus breaks out of the circle of definitions.
Understanding what is right in terms of utility
The theory in ethics known as utilitarianism may be understood as an attempt to give such an analysis. Utilitarianism makes what is right depend on the consequences of the various actions the agent can perform in the circumstances. Utilitarianism defines right in terms of utility: an act is right iff no alternative action in the circumstances has a higher utility. Hedonistic utilitarianism defines utility in terms of pleasure. In this case, an act is right iff no alternative action the agent can perform would bring about more pleasure in the world.
In the case of the runaway trolley, hedonistic utilitarianism gives the result that the agent is prohibited from doing anything other than push the man in front of the train. Saving the five lives is what brings about the most pleasure in the world. The other option, doing nothing, is prohibited because it does not bring about as much pleasure.
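The utilitarian definition can be written out directly. In this sketch the utility numbers are illustrative stand-ins (lives preserved as a crude proxy for pleasure); the act names and figures are assumptions of the example, not part of the source.

```python
# Utilitarian definition of "right": an act is right iff no
# alternative act in the circumstances has a higher utility.

def right(act, alternatives, utility):
    return all(utility(act) >= utility(alt) for alt in alternatives)

# Illustrative trolley utilities: lives preserved by each option.
utilities = {"push the man": 5, "do nothing": 1}
acts = list(utilities)

assert right("push the man", acts, utilities.get)      # right (highest utility)
assert not right("do nothing", acts, utilities.get)    # not right, so prohibited
```

On these numbers the definition yields exactly the paradoxical result discussed above: doing nothing is not right, so pushing the man is the only permitted option.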
This result is paradoxical. Many people would say that it is wrong (not right) to push the man in front of the train.
(Note that changing the example does nothing to defend hedonistic utilitarianism. We can, of course, change the example. We can consider a different example that is like the original but with the addition that the man, if he is not pushed in front of the train, will go on to do something so momentous that it brings more pleasure to the world than is lost with the lives of the five. About this example, hedonistic utilitarianism says to let the five die. This result, however, does nothing to change what hedonistic utilitarianism says about the original example.)
Machine ethics presupposes knowledge of what is prohibited
On the basis of the runaway trolley example, one might conclude that utilitarianism is false. The problem, though, is clear. We need to know what is prohibited in what circumstances.
Prohibitions in the Logic Programming/Agent Model
Prohibitions are a little like maintenance goals. Beliefs can trigger maintenance goals. These beliefs may be produced directly by observation or by reasoning forward on the basis of observations. In the example below, the items in the plan of action function like observations. The agent reasons forward from the items in the plan to consequences to determine if any of these consequences trigger a prohibition. If they do, the agent abandons the plan because he is prohibited from executing this plan to satisfy the achievement goal.
The Agent in the Trolley example
In the example, the agent has general beliefs, beliefs about the current situation, a maintenance goal, the ability to engage in backward and forward reasoning, and a prohibition.
General beliefs about the world:
a person is killed if the person is
in danger of being killed by a train
and no one saves the person from being killed by the train.
a person X kills a person Y if X throws Y in front of a train.
a person is in danger of being killed by a train
if the person is on a railtrack
and a train is speeding along the railtrack
and the person is unable to escape from the railtrack.
a person saves a person from being killed by a train
if the person stops the train.
a person stops a train
if the person places a heavy object in front of the train.
a person places a heavy object in front of the train
if the heavy object is next to the person
and the train is on a railtrack
and the person is within throwing distance of the object to the railtrack
and the person throws the object in front of the train.
Beliefs about the current situation:
five people are on the railtrack.
a train is speeding along the railtrack.
the five people are unable to escape from the railtrack.
john is next to me.
john is an innocent bystander.
john is a heavy object.
I am within throwing distance of john to the railtrack.
A maintenance goal:
if a person is in danger of being
killed by a train
then I respond to the danger of the person being killed by the train.
Two beliefs support the maintenance goal:
I respond to the danger of a person
being killed by the train
if I ignore the danger.
I respond to the danger of a person being killed by the train
if I save the person from being killed by the train.
A prohibition:
If I kill a person and the person is an innocent bystander, then false.
How the agent reasons in the example
Making certain assumptions for simplicity, forward reasoning yields the belief that
five people are in danger of being killed by the train
This belief triggers the maintenance goal to introduce the achievement goal
I respond to the danger of the five people being killed by the train
Backward reasoning provides two alternative subgoals
I ignore the danger
I save the five people from being killed by the train.
Thinking about the second subgoal produces the plan of action
I throw john onto the railtrack in front of the train
The question now is whether this is a good plan. To determine the answer, the agent reasons forward (or prospectively) from the plan to its consequences. This is where the prohibition comes into play: it rules the plan out because the plan has an unacceptable consequence (represented as false). If the agent were to execute the plan, he would kill an innocent bystander. That would be wrong. So he does not do it. Instead, in the example, he ignores the danger.
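The prospective check can be sketched as propositional forward chaining. This is a simplified stand-in for the beliefs listed above, assuming a small propositional encoding (the rule and fact strings are illustrative, not the source's syntax): the candidate plan's actions are added to the known facts, consequences are derived to a fixed point, and the plan is rejected if `false` is derivable.

```python
# Prospective reasoning sketch: forward-chain from a candidate plan's
# actions to consequences; reject the plan if the prohibition fires
# (i.e., if "false" is derivable).

RULES = [
    # (conclusion, set of conditions)
    ("i kill john", {"i throw john in front of the train"}),
    # the prohibition, with "false" as its conclusion:
    ("false", {"i kill john", "john is an innocent bystander"}),
]

FACTS = {"john is an innocent bystander"}

def consequences(plan_actions):
    derived = set(FACTS) | set(plan_actions)
    changed = True
    while changed:                      # forward chaining to a fixed point
        changed = False
        for conclusion, conditions in RULES:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def acceptable(plan_actions):
    return "false" not in consequences(plan_actions)

assert not acceptable({"i throw john in front of the train"})  # plan ruled out
assert acceptable(set())                                       # ignoring the danger is not prohibited
```

The sketch shows why the prohibition behaves like a maintenance goal: it is just another forward-chaining rule, except that its conclusion, `false`, marks any plan that triggers it as unexecutable.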
The use of prohibitions in other contexts
The example shows the use of a prohibition in the context of ethics, but prohibitions have application outside of ethics. "Consider an agent who wants to bring parcels from some location A to a location B, using its truck. The distance between A and B is too large to make it without refueling, and so, in order not to end up without gas, the agent needs to stop every once in a while to refuel. The fact that the agent does not want to end up without gas, can be modeled as a maintenance goal [= what we are calling a "prohibition"]. This maintenance goal constrains the actions of the agent, as it is not supposed to drive on in order to fulfil its goal of delivering the parcels, if driving on would cause it to run out of gas" (Hindriks K.V., van Riemsdijk M.B., "Satisfying Maintenance Goals").
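The refueling constraint in the quotation can be sketched the same way. The tank capacity and leg costs below are made-up numbers for illustration only; the point is that the prohibition vetoes the "drive on" plan whenever it would leave the agent without gas.

```python
# Refueling sketch: a prohibition on plans that end with an empty tank.
# All quantities are illustrative.

def violates_constraint(fuel, leg_cost):
    # prohibition: if I drive on and that leaves me without gas, then false
    return fuel - leg_cost < 0

def plan_leg(fuel, leg_cost):
    if violates_constraint(fuel, leg_cost):
        return "refuel first"   # the constraint forces a refueling stop
    return "drive on"

assert plan_leg(10, 30) == "refuel first"
assert plan_leg(40, 30) == "drive on"
```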
What we have accomplished in this lecture
We considered the place of prohibitions among the terms of moral appraisal, how prohibitions can be understood to function like maintenance goals, and how prohibitions can be added to the logic programming/agent model as constraints on which plans an agent may translate into action.