Contesting automated decisions

A view of transparency implications

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

This paper identifies the essentials of a ‘transparency model’ which aims to scrutinise automated data-driven decision-making systems not by the mechanisms of their operation but rather by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, concluding that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’, ii) the ‘normativities’ contained by the system both at the inference and decision (rule-making) level, iii) the context and further implications of the decision, and iv) the accountable actors.
Original language: English
Pages (from-to): 433-446
Number of pages: 14
Journal: EDPL
Volume: 4
Issue number: 4
DOIs: 10.21552/edpl/2018/4/6
Publication status: Published - 1 Dec 2018

Fingerprint

Transparency
Learning systems
Decision making

Cite this

@article{a721007fae5b462d9333136fd474451e,
title = "Contesting automated decisions: A view of transparency implications",
abstract = "This paper identifies the essentials of a ‘transparency model’ which aims to scrutinise automated data-driven decision-making systems not by the mechanisms of their operation but rather by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, concluding that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’, ii) the ‘normativities’ contained by the system both at the inference and decision (rule-making) level, iii) the context and further implications of the decision, and iv) the accountable actors.",
author = "Emre Bayamlioglu",
year = "2018",
month = "12",
day = "1",
doi = "10.21552/edpl/2018/4/6",
language = "English",
volume = "4",
pages = "433--446",
journal = "EDPL",
publisher = "Lexxion",
number = "4",
}

Contesting automated decisions: A view of transparency implications. / Bayamlioglu, Emre.

In: EDPL, Vol. 4, No. 4, 01.12.2018, p. 433-446.


TY  - JOUR
T1  - Contesting automated decisions
T2  - A view of transparency implications
AU  - Bayamlioglu, Emre
PY  - 2018/12/1
Y1  - 2018/12/1
N2  - This paper identifies the essentials of a ‘transparency model’ which aims to scrutinise automated data-driven decision-making systems not by the mechanisms of their operation but rather by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, concluding that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’, ii) the ‘normativities’ contained by the system both at the inference and decision (rule-making) level, iii) the context and further implications of the decision, and iv) the accountable actors.
AB  - This paper identifies the essentials of a ‘transparency model’ which aims to scrutinise automated data-driven decision-making systems not by the mechanisms of their operation but rather by the normativity embedded in their behaviour/action. First, transparency-related concerns and challenges inherent in machine learning are conceptualised as ‘informational asymmetries’, concluding that the transparency requirements for the effective contestation of automated decisions go far beyond the mere disclosure of algorithms. Next, essential components of a rule-based ‘transparency model’ are described as: i) the data as ‘decisional input’, ii) the ‘normativities’ contained by the system both at the inference and decision (rule-making) level, iii) the context and further implications of the decision, and iv) the accountable actors.
U2  - 10.21552/edpl/2018/4/6
DO  - 10.21552/edpl/2018/4/6
M3  - Article
VL  - 4
SP  - 433
EP  - 446
JO  - EDPL
JF  - EDPL
IS  - 4
ER  -