Description
(Presented during the recurring Business Ethics seminar.)
The right to explanation is the putative right of data subjects to an explanation of how and why the algorithmic decisions that affect them are made. Recognizing a right to explanation is one solution to the problem of algorithmic opacity, or the inscrutability of AI-powered decision-making algorithms that affect important aspects of our lives. Current discussions of the right to explanation often focus on the question of what an adequate explanation of algorithmic decisions would look like, a particularly pressing question in view of the complex machine learning algorithms that facilitate these decisions. The main aim of this paper is to situate these recent discussions in the context of what philosophers call epistemic injustice, paying special attention to the role of business in explaining (and justifying) algorithmic decisions.
The central argument I develop is that implementing the right to explanation risks drawing on epistemic practices that are liable to exclude, marginalize, or otherwise harm certain communities of knowers. To show this, I first examine the grounds on which the right to explanation is usually defended: to enable self-advocacy, to support deliberative agency, and to facilitate algorithmic accountability. I then demonstrate that each of these grounds involves potentially unjust epistemic practices. Indeed, giving, demanding, and contesting explanations of algorithmic decisions do not occur in a vacuum, but against the background of social conditions that disadvantage some social groups in the economy of knowledge and belief. It is important for industry and other stakeholders to recognize this if we want to ensure equitable access to algorithmic explanations and avoid further marginalizing groups already vulnerable to exploitation and algorithmic bias.
| Period | 13 Oct 2023 |
|---|---|
| Held at | Faculty of Economics and Business, University of Groningen, Netherlands |
| Degree of Recognition | Regional |
Keywords
- inclusive AI
- human-centered AI
- AI ethics
- epistemic injustice