Abstract
The AI Ethics literature has identified many forms of harm caused, perpetuated, or exacerbated by artificial intelligence (AI). One, however, has been overlooked. In this paper we argue that the increasing use of AI heightens the risk of 'hermeneutic harm', which occurs when people are unable to make sense of, or come to terms with, unexpected, unwelcome, or harmful events they experience. We develop several examples in support of this argument. Importantly, the argument makes no assumption of flawed design, biased training data, or misuse: hermeneutic harm can occur even in their absence. Explainable AI (XAI) could plausibly reduce the risk of hermeneutic harm in some cases; one respect in which this paper advances the field is thus by showing the need to further broaden XAI's understanding of the social function of explanation. Yet XAI cannot fully mitigate the risk: as our choice of examples shows, hermeneutic harm would persist even if all 'black-box' problems of system opacity were solved. The paper thereby highlights an important but underexplored risk posed by AI systems.
| Field | Value |
|---|---|
| Original language | English |
| Article number | 33 |
| Number of pages | 18 |
| Journal | Minds and Machines |
| Volume | 35 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 24 Jul 2025 |
| Externally published | Yes |
Keywords
- AI Ethics
- Artificial Intelligence
- Explainable AI
- Hermeneutic Harm
- Laws
- Sense-making
- Transparency