In the future, a growing number of combat operations will be carried out by autonomous weapon systems (AWS), which, at the operational level, will not rely on direct human input. Taking humans out of the loop raises questions about the compatibility of AWS with fundamental requirements of international humanitarian law (IHL), such as the principles of distinction and proportionality, and complicates the allocation of responsibility for war crimes and crimes against humanity.
This Article addresses the development toward greater autonomy in military technology along three dimensions: legal, ethical, and political. First, it analyzes the potential dehumanizing effect of AWS with respect to the principles of distinction and proportionality, as well as criminal responsibility.
Second, this Article explores, from an ethical perspective, the advantages and disadvantages of deploying AWS, independent of legal considerations. Authors from various fields have weighed in on this debate, but often without linking their arguments to the legal questions. This Article fills that gap by bridging these disparate discourses and suggests that there are important ethical reasons militating against the use of AWS.
Third, this Article argues that the introduction of AWS alters the risk calculus of whether to engage in or prolong an armed conflict, and that this alteration is likely to make such decisions more palatable and less risky for political decision makers.