Why the proposed artificial intelligence regulation does not deliver on the promise to protect individuals from harm

Tetyana Krupiy

Research output: Online publication or Non-textual form › Web publication/site › Popular

Abstract

In April 2021 the European Commission circulated a draft proposal called ‘Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (hereinafter AI Regulation). The purpose of the proposed AI Regulation is to harmonise the rules governing artificial intelligence technology (hereinafter AI) in the European Union (hereinafter EU) in a manner that addresses ethical and human rights concerns (p. 1, para. 1.1). This blog post argues that the proposed AI Regulation does not sufficiently protect individuals from harms arising from the use of AI technology. One reason is that policy makers did not engage with the limitations of international human rights treaties and the EU Charter in protecting fundamental rights in the digital context. If policy makers want to achieve their objective of developing ‘an ecosystem of trust’ by adopting a legal framework on ‘trustworthy’ AI (p. 1, para. 1.1), they need to amend the draft AI Regulation. Individuals will find it hard to place trust in the use of AI technology if the Regulation does not sufficiently safeguard their interests and fundamental rights. This contribution uses the prohibition of discrimination to illustrate these concerns. First, it shows that international human rights law inadequately protects human diversity. Because policy makers did not address this issue, they failed to detect that the representation of individuals in AI mathematical models distorts their identities and undermines the protection of human diversity. Second, it demonstrates that defining discrimination by reference to adverse treatment of individuals on the basis of innate characteristics leads to insufficient protection of individuals in the digital context.
Original language: English
Place of publication: https://europeanlawblog.eu
Media of output: Online
Publication status: Published - 23 Jul 2021

Keywords

  • artificial intelligence
  • discrimination
  • AI Regulation
