VIRTUAL CRIMINAL RESPONSIBILITY

GABRIEL HALLEVY *

I. INTRODUCTION

In 1981, a 37-year-old Japanese employee of a motorcycle factory was killed by an artificial-intelligence robot working near him.[1] The robot erroneously identified the employee as a threat to its mission and calculated that the most efficient way to eliminate this threat was to push him into an adjacent operating machine. Using its very powerful hydraulic arm, the robot smashed the surprised worker into the operating machine, killing him instantly, and then resumed its duties with no one left to interfere with its mission. Unfortunately, this is not science fiction, and the legal question is: who is to be held liable for this killing?

The technological world is changing rapidly. More and more simple human activities are being replaced by robots and computers. As long as humanity used computers as mere tools, there was no real difference between computers and screwdrivers, cars or telephones. When computers became sophisticated, we used to say that computers "think" for us. The problem began when computers evolved from "thinking" machines (machines programmed to perform defined thought processes or computations) into thinking machines (without quotation marks), or Artificial Intelligence (AI). Artificial Intelligence research began in the early 1950s.[2] Since then, AI entities have become an integral part of modern human life, functioning in a far more sophisticated manner than other everyday tools. Could they become dangerous?

In 1950, Isaac Asimov set down three fundamental laws of robotics in his science fiction masterpiece "I, Robot":

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.[3]

These three fundamental laws are obviously contradictory.[4] What if a person orders a robot to hurt another person for that other person's own good? What if the robot is in police service and the commander of the mission orders it to arrest a suspect, and the suspect resists arrest? Or what if the robot is in medical service and is ordered to perform a surgical procedure on a patient, the patient objects, but the medical doctor insists that the procedure is for the patient's own good and repeats the order to the robot? Besides, Asimov's fundamental laws of robotics relate only to robots; AI software not installed in a robot falls outside their scope.

* Faculty of Law, Ono Academic College.
[1] The facts above are based on the overview in Yueh-Hsuan Weng, Chien-Hsun Chen and Chuen-Tsai Sun, Towards the Human-Robot Co-Existence Society: On Safety Intelligence for Next Generation Robots, 1 Int. J. Soc. Robot 267, 273 (2009).
