Abstract
This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think their own moral reasoning and decision-making could be improved: (1) one’s actions, character, or other evaluable attributes fall short of one’s values and moral beliefs; (2) one sometimes misjudges, or is uncertain about, what the right thing to do is in particular situations, given one’s values; and (3) one is uncertain about some fundamental moral questions, or recognizes the possibility that some of one’s core moral beliefs and values are mistaken. We sketch why one might think that AI tools could be used to support moral improvement in these areas, and we describe two types of assistance: preparatory assistance, including advice and training supplied in advance of moral deliberation; and on-the-spot assistance, including advice and the facilitation of moral functioning over the course of moral deliberation. We then turn to some of the ethical issues that AEAs might raise, looking in particular at three under-appreciated problems posed by the use of AI for moral self-improvement: (1) reliance on sensitive moral data; (2) the inescapability of outside influences on AEAs; and (3) AEA usage prompting the user to adopt beliefs and make decisions without adequate reasons.
Original language | English |
---|---|
Title of host publication | Oxford Handbook of Digital Ethics |
Editors | Carissa Véliz |
Publisher | Oxford University Press |
Chapter | 17 |
Pages | 312-335 |
Publication status | Published - 2023 |