  • H. Oberoi (law student)

When Robots Kill: Is the Criminal Code ready for the Age of AI?

On September 10, 2018, the documentary section of the forty-third Toronto International Film Festival opened with the world premiere of Maxim Pozdorovkin’s film The Truth About Killer Robots. In an interview with a journalist from The Guardian, Pozdorovkin said that the inspiration for the documentary was a news article he had read about a robot killing a worker on an automobile assembly line in Germany.1 While much of the film is concerned with the impact automation will have on our lives, particularly the dignity of human labour and the loss of meaningful work, Pozdorovkin returns at several points to the topic of robotic killers.

One powerful section of the documentary recounts how the Dallas police confronted a mass shooter, Micah Johnson. A veteran of the war in Afghanistan, Johnson killed five police officers and injured nine others in Dallas on July 7, 2016. After negotiations with Johnson failed, the police chief initially considered deploying a sniper, but ultimately decided that a sniper alone might not succeed and instead sent in a remote-controlled bomb disposal vehicle, the Andros Mark V-A1.2 The robot delivered an explosive charge that killed Johnson, ending the standoff. According to legal scholars, this was the first time in the history of policing in the United States that a robot was used to kill a suspect.3 While the actual killing in the Johnson case was carried out by a robot armed with an explosive, the robot remained under strict human supervision: its lethal capabilities could only be activated through a human chain of command.

However, recent advances in Artificial Intelligence (AI) are creating an entirely new generation of robots, powered by algorithms and novel data sets that make them capable of self-learning. In other words, we are on the cusp of a fundamental technological transformation: within the next few years, machines will be able to decide autonomously how, where and when to kill. These next-generation robots can function without any human supervision. The Pentagon currently has a program code-named Maven.4 The project is a follow-up to the successful deployment of drones as semi-autonomous warfare systems. An early Maven drone, according to the key source I have consulted, is the ScanEagle surveillance drone, which comes equipped with video surveillance cameras.5 At this stage the cameras are capable of capturing video images of a small town, and through machine-learning technologies those images can be used for facial recognition of any targeted demographic group inhabiting the town, say all males between the ages of eighteen and thirty-two.6 Based on these demographic profiles, the drones can then attack the listed group. The aspiration is for a fleet of drones to conduct the entire operation, from videography to destruction, autonomously and without any human oversight.

The design, manufacture and deployment of autonomous robots raise a complex set of legal and ethical issues. If a computerized machine decides to kill, how will blame be assigned? Would culpability fall on the programmer, the corporation that engineered the robot, or the police or armed forces that housed it? As we engage with these tough moral issues, scholars keep turning back to Isaac Asimov’s science fiction stories, particularly “Runaround,” published in his 1950 collection I, Robot.7 “Runaround” enunciates Asimov’s three laws concerning robots: “(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”8

But autonomous robots violate Asimov’s first law outright. Thus, in recent years a series of amendments to Asimov’s three laws has been suggested. The best amendments in the field of robotics have been put forward by the Engineering and Physical Sciences Research Council (EPSRC) in the United Kingdom. This body proposes five guidelines on its website: “(1) Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security. (2) Humans, not robots, are responsible agents. Robots should be designed and operated as far as practicable to comply with existing laws, fundamental rights and freedoms, including privacy. (3) Robots are products. They should be designed using processes which assure their safety and security. (4) Robots are manufactured artifacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent. (5) The person with legal responsibility for a robot should be attributed.”9

Although it is still too early to say how these guidelines will be embedded in law, it is increasingly clear that in the years to come not only the Canadian Criminal Code but also the international treaties and conventions governing the rules of warfare and human rights will have to be amended to take account of these massive changes in technology.10

Picture retrieved from Wikimedia Commons.


1 See, Zach Vasquez, “The Truth about Killer Robots: The Year’s Most Terrifying Documentary,” The Guardian, November 26, 2018.

2 The above reconstruction of the Johnson killing is based on the details provided in . Retrieved on February 20, 2019.

3 Ibid.

4 My understanding of Project Maven and the description provided here of the technology the Project seeks to develop is based entirely on the letter written to Google management by the International Committee for Robot Arms Control (ICRAC). The letter can be seen at . Retrieved on February 21, 2019.

5 Ibid.

6 Ibid.

7 I was first alerted to Isaac Asimov’s science-fiction stories through The Guardian piece cited in footnote 1.

8 Quoted in . Retrieved February 21, 2019.

10 For a beginning in this direction see the excellent report by Ivan Semeniuk, “International Scientists Urge UN to Ban Lethal Robots,” The Globe and Mail, February 16, 2019.
