The irresponsibility of not using AI in the military

Herwin Meerveld, R. H. A. Lindelauf*, E. O. Postma, M. Postma

*Corresponding author for this work

Research output: Contribution to journal › Article › Scientific › peer-review

2 Citations (Scopus)

Abstract

The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks of the use of AI in target engagement, the debate about responsible AI should (i) concern each step in the MDMP, and (ii) take ethical considerations and enhanced performance in military operations into account. A characterization of the debate on responsible AI in the military, considering both machine and human weaknesses and strengths, is provided in this paper. We present inroads into the improvement of the MDMP, and thus military operations, through the use of AI for decision support, taking each quadrant of this characterization into account.

Original language: English
Article number: 14
Pages (from-to): 1-6
Number of pages: 6
Journal: Ethics and Information Technology
Volume: 25
Issue number: 1
Publication status: Published - Mar 2023

Keywords

  • Decision-making
  • Intelligence cycle
  • Military decision-making process
  • Responsible AI

