Artificial Intelligence and Moral Reasoning: Shifting Moral Responsibility in War?
Artificial Intelligence and Global Security
ISBN: 978-1-78973-812-4, eISBN: 978-1-78973-811-7
Publication date: 15 July 2020
Abstract
It is no longer merely far-fetched science fiction to think that robots will be the chief combatants, waging wars in place of humans. Or is it? While artificial intelligence (AI) has made remarkable strides, tempting us to personify machines as "making decisions" and "choosing targets", a more careful analysis reveals that even the most sophisticated AI can only be an instrument, rather than an agent, of war. After establishing the layered existential nature of war, we lay out the prerequisites for being a (moral) agent of war. We then argue that present AI falls short of this bar, and that we have strong reason to think this will not change soon. With that in mind, we put forth a second argument against robots as agents: AI lies on a continuum with other clearly nonagential tools of war, like swords and chariots. Lastly, we unpack what this all means: if AI does not add another moral player to the battlefield, how (if at all) should AI change the way we think about war?
Citation
Kaurin, P.S. and Hart, C.T. (2020), "Artificial Intelligence and Moral Reasoning: Shifting Moral Responsibility in War?", Masakowski, Y.R. (Ed.) Artificial Intelligence and Global Security, Emerald Publishing Limited, Leeds, pp. 121-136. https://doi.org/10.1108/978-1-78973-811-720201007
Publisher: Emerald Publishing Limited
Copyright © 2020 Emerald Publishing Limited