Operationalising responsible AI in the military domain: a context-specific assessment

Research output: Contribution to journal › Article › Scientific › Peer-reviewed

Abstract

The rapid integration of Artificial Intelligence (AI) into the military domain necessitates actionable strategies for translating high-level principles of responsible use into practical guidelines. However, there remains a problematic gap between these principles and the norms that govern the use of AI in military operations. Moreover, these norms are highly dependent on the particular context in which military AI is deployed. This leads to normative uncertainty: what is responsible use of AI in a specific military operation? Unclear practical guidelines pose challenges for technology developers and military operators involved in the deployment of military AI. This paper emphasises the need for a context-specific assessment of responsible use of military AI. Moving beyond a one-size-fits-all standard, we propose the Military AI Responsibility Contextualisation (MARC) framework: a structured approach that facilitates a context-specific assessment. In that way, this paper aims to contribute to bridging the gap between abstract principles and practical guidelines. We furthermore emphasise the need for interdisciplinary collaboration in further operationalising responsible military AI, to work towards the ethical and effective development and deployment of AI in military operations.
Original language: English
Article number: 48
Number of pages: 11
Journal: Ethics and Information Technology
Volume: 27
DOIs
Publication status: Published - 6 Oct 2025

Keywords

  • Artificial Intelligence
  • Contextualisation
  • Military
  • Operationalisation
  • Responsibility

