Speculation about robot morality is almost as old as the concept of a robot itself. Asimov’s Three Laws of Robotics provide an early and well-discussed example of moral rules robots should observe. Despite the widespread influence of the Three Laws and their role in shaping visions of future robot-dense worlds, hands-on roboticists have largely dismissed these laws as futuristic, occupied instead with less abstract questions about robots’ behaviour, such as locomotion, obstacle avoidance, and automated learning. Between morality and function lies a vast gap. When robots enter our everyday lives, they will have to observe social and legal norms. For example, social robots in hospitals are expected to observe social rules (they should not interrupt a mourning family), while robotic dust cleaners scouring the streets for waste, as well as automated cars, will have to observe traffic regulations. In this article we elaborate on the various ways in which robotic behaviour is regulated. We distinguish between imposing regulation on robots, imposing regulation by robots, and imposing regulation in robots. In doing so, we distinguish regulation that aims at influencing human behaviour from regulation whose scope is robots’ behaviour. We claim that the artificial agency of robots requires designers and regulators to consider how to regulate robots’ behaviour in a way that renders it compliant with legal norms. Regulation by design offers a means to this end. We further explore this idea through the example of automated cars.
Keywords: robots, techno-regulation, code, artificial intelligence, value sensitive design