Abstract
The article addresses how the rules of targeting regulate lethal autonomous robots. Because the rules of targeting are addressed to human decision-makers, it is necessary to clarify what qualities lethal autonomous robots would need to possess in order to approximate human decision-making and to apply these rules to battlefield scenarios. The article also analyses state practice in order to propose how the degree of certainty required by the principle of distinction may be translated into a numerical value, and it identifies the reliability rate with which lethal autonomous robots would need to function. The article then analyses whether the employment of three categories of robots complies with the rules of targeting. The first category covers robots that operate on a fixed algorithm. The second category pertains to robots with artificial intelligence that learn from exposure to battlefield scenarios. The third category relates to robots that emulate the working of a human brain.
| Original language | English |
| --- | --- |
| Pages (from-to) | 2-58 |
| Number of pages | 56 |
| Journal | Melbourne Journal of International Law |
| Volume | 16 |
| Issue number | 1 |
| Publication status | Published - 1 Jun 2015 |
| Externally published | Yes |
Keywords
- lethal autonomous robots
- rules of targeting
- international humanitarian law