Contesting automated decisions: A view of transparency implications

Research output: Contribution to journal › Article › Scientific › peer-review



This paper identifies the essentials of a ‘transparency model’ that aims to scrutinise automated data-driven decision-making systems not through the mechanisms of their operation but through the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, leading to the conclusion that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, the essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’; ii) the ‘normativities’ contained in the system at both the inference and decision (rule-making) levels; iii) the context and further implications of the decision; and iv) the accountable actors.
Original language: English
Pages (from-to): 433-446
Number of pages: 14
Issue number: 4
Publication status: Published - 1 Dec 2018

