The Stanford Encyclopedia of Philosophy defines Utilitarianism: “Though there are many varieties of the view discussed, utilitarianism is generally held to be the view that the morally right action is the action that produces the most good.” This sounds like a clear mathematical rule; nevertheless, what produces the most good (or, in the trolley dilemma, the least evil) remains a sophisticated philosophical question, and a pure counting of individuals does not solve the problem.
Artificial Intelligence researcher and author Andrew Rosenblum created the example in which a self-driving car faces a truck approaching head-on, a collision that would surely destroy the car and kill its passenger. The only way to avoid this is to swerve and drive the car into a group of 15 pedestrians. Purely based on Asimov’s Laws (and Utilitarianism), the car would have to do the mathematics and decide to sacrifice its own passenger. This may interfere with the car’s obligation to protect its passengers and owner. Because of this, the car manufacturer may feel tempted to build a priority into the software so that the car protects its owner, as this is the person who pays the company for the intelligent car.
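The pure utilitarian counting described above can be sketched in a few lines. This is a hypothetical illustration, not any real vehicle's control logic; the function name, data structure, and numbers are invented for the example.

```python
# Hypothetical sketch of a purely utilitarian crash decision:
# each option is scored only by the number of lives it costs.
# Names and figures are illustrative, not from any real system.

def utilitarian_choice(options):
    """Return the option with the fewest expected fatalities."""
    return min(options, key=lambda o: o["fatalities"])

# Rosenblum's scenario: stay on course (the one passenger dies)
# or swerve into the group of 15 pedestrians.
options = [
    {"action": "stay", "fatalities": 1},
    {"action": "swerve", "fatalities": 15},
]

print(utilitarian_choice(options)["action"])  # -> stay
```

The sketch makes the essay's point concrete: under pure counting the car sacrifices its passenger, and everything the following paragraphs discuss (owner priority, tampering, regulation) amounts to disputing or overriding this one `min` over fatalities.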
What about the 1:1 situation, where the decision is to sacrifice the passenger or one pedestrian? Should the car here always decide in favor of its passenger? A government cannot place such a decision on the car manufacturer, the programmer, or the software. It is required to establish laws and guidelines which intelligent software has to follow, especially in such grey areas. In a near-future scenario, the potential “if-then” rules of the software must be auditable. As the discussion about Volkswagen’s defeat-device software and emission controls showed, software engineers are under high pressure to reach ambitious external and internal goals; they are tempted to find solutions that bypass the relevant controls.
Today, chip-tuning is already used to alter engine management and extract additional horsepower. In most cases this is legal, but it voids the manufacturer’s warranty. Once self-driving cars become a relevant market, it is only a question of time until external programmers offer software promising higher safety for the owners: a programmed preference for the passenger over the pedestrians.
Governments and car manufacturers are required to find solutions to avoid these dilemmas, via law, but also via technical protection against non-approved software. Similar to today’s computer viruses, it will be a continuous competition between new viruses and the anti-virus industry. Reprogramming an autonomous car to not follow its default setting but to automatically prefer its passengers has to be judged as a crime, because the intention matters, not the execution. Without waiting for the scenario to occur, the car owner has already decided about the preferences; the car would have no choice but to prefer its passengers and decide against the pedestrians. If the situation occurred, the crime would already have been decided. This aligns with Donald Cressey’s “Fraud Triangle” (Rationalization, Pressure, and Opportunity lead to the reprogramming of the AI; the action would later be executed in the defined scenario) and with Philip K. Dick’s short story “The Minority Report”.
To reduce this temptation (the “pressure” element), autonomous vehicles could be forbidden for individuals to own and offered solely as part of public transportation. As passengers would not know which vehicle they will use next, it would make no sense to hack single units to change the default settings. As such processes could still be bypassed, physical and cyber protection would additionally make the life of potential hackers as difficult as possible. A fleet of autonomous vehicles also makes it possible to synchronize the single units and maximize utility for society, including the development of a hybrid system of autonomous buses that could split up into single units for the “last mile”.
Utilitarianism raises the question of values, and the answers may depend on the individual and also on the society. On average, individuals prefer dogs over cats; in Russia, however, the opposite is the case: 33% of households own at least one cat, but only 26% at least one dog. For the programming of the autonomous car, this could mean that the global default would be to spare the dog and sacrifice the cat in the case of an accident, while for Russian clients the programming would have to be different.
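Such region-dependent defaults would, in practice, be a small configuration layer over the decision logic. The sketch below is purely hypothetical: the table structure, region codes, and priority numbers are invented for illustration, and only the Russian cat/dog figures come from the text above.

```python
# Hypothetical locale-dependent value table: a global default
# that single regions can override. Structure and names are
# invented; only the Russian preference reflects the essay's
# statistic (33% cat households vs 26% dog households).

GLOBAL_DEFAULT = {"dog": 2, "cat": 1}   # higher number = spared first
REGION_OVERRIDES = {
    "RU": {"dog": 1, "cat": 2},         # cats preferred in Russia
}

def priority_table(region):
    """Return the animal-priority table for a given region code."""
    return REGION_OVERRIDES.get(region, GLOBAL_DEFAULT)

print(priority_table("RU"))   # Russian override
print(priority_table("DE"))   # falls back to the global default
```

Even this toy example shows the regulatory problem the essay raises: every entry in such a table is an explicit, auditable value judgment, and someone has to decide who is allowed to write it.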
Another aspect is that the passengers using a car are aware of imposing a risk on others through the vehicle’s movement. For this reason, on ethical grounds, a car might have to decide always against its passengers and protect the pedestrian.
Even if humans have preferences, in an accident scenario the decision to pull the steering wheel to the left or right has to be made in less than a second; it is the result of a reflex rather than of a deliberate decision-making process. In contrast, a self-driving car may decide faster than a human driver, not only because of the speed of the decision itself, but also because it may detect and understand the situation earlier. So what is a reflex for a human may be the result of a decision-making process for the autonomous car. Notably, the Massachusetts Institute of Technology created the website “Moral Machine”, where users can explore their own morals and values for the case of such an accident.
What about the case of armored transports? Should the AI be programmed to ignore potential road blocks to reduce the risk of robbery, even if this would jeopardize human life? If not, potential criminals could place themselves in the way of the transport and it would have to stop: an opportunity for gangs to stop such transports. If the AI ignored a person in the way, it would be programmed to kill. And how could it distinguish between a criminal on the road and an innocent pedestrian?
To relate the example to governments: what about autonomous water cannons used to stop demonstrations? In democratic countries these machines would have to stop if a protester stands in their way, until police forces take the person away. In more autocratic systems, the government may require a different coding.