The ethics of autonomous technologies: Does AI face a responsibility gap?

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review

Abstract

There are several reasons to be ethically concerned about the development and use of AI. In this contribution, we focus on one specific theme of concern: moral responsibility. In particular, we consider whether the use of autonomous AI causes a responsibility gap and put forward the thesis that this is not the case. Our argument proceeds as follows. First, we provide some conceptual background by discussing, respectively, what autonomous systems are, how the notion of responsibility can be understood, and what the responsibility gap is about. Second, we explore to what extent it could make sense to assign responsibility to artificial systems. Third, we argue that the use of autonomous systems does not necessarily lead to a responsibility gap. In the fourth and last section of this chapter, we set out why the responsibility gap – even if it were to exist – is not necessarily problematic.
Original language: English
Title of host publication: The Cambridge handbook of the law, ethics and policy of artificial intelligence
Editors: Nathalie A. Smuha
Publisher: Cambridge University Press
Chapter: 5
Pages: 101-116
Number of pages: 16
ISBN (Electronic): 9781009367783
ISBN (Print): 9781009367813
DOIs
Publication status: Published - 6 Feb 2025
Externally published: Yes

Publication series

Name: Cambridge Law Handbooks
Publisher: Cambridge University Press

Keywords

  • responsibility gap
  • autonomous systems
  • artificial intelligence
  • morality
