Bridging the gap between AI and explainability in the GDPR: Towards trustworthiness-by-design in automated decision-making

Ronan Hamon, Hendrik Junklewitz, Ignacio Sanchez, Gianclaudio Malgieri, Paul de Hert

Research output: Contribution to journal › Article › Scientific › peer-review

34 Citations (Scopus)

Abstract

Can satisfactory explanations for complex machine learning models be achieved in high-risk automated decision-making? How can such explanations be integrated into a data protection framework safeguarding a right to explanation? This article explores, from an interdisciplinary point of view, the connection between existing legal requirements for the explainability of AI systems set out in the General Data Protection Regulation (GDPR) and the current state of the art in the field of explainable AI. It studies the challenges of providing human-legible explanations for current and future AI-based decision-making systems in practice, based on two scenarios of automated decision-making: credit risk scoring and medical diagnosis of COVID-19. These scenarios exemplify the trend towards increasingly complex machine learning algorithms in automated decision-making, both in terms of data and models. Current machine learning techniques, in particular those based on deep learning, are unable to establish clear causal links between input data and final decisions. This limits the provision of exact, human-legible reasons behind specific decisions, and presents a serious challenge to the provision of satisfactory, fair and transparent explanations. Therefore, the conclusion is that the quality of explanations might not be considered an adequate safeguard for automated decision-making processes under the GDPR. Accordingly, additional tools should be considered to complement explanations. These could include algorithmic impact assessments, other forms of algorithmic justification based on broader AI principles, and new technical developments in trustworthy AI. This suggests that eventually all of these approaches would need to be considered as a whole.
Original language: English
Pages (from-to): 72-85
Number of pages: 14
Journal: IEEE Computational Intelligence Magazine
Volume: 17
Issue number: 1
Publication status: Published - 1 Feb 2022
