Do human rights frameworks identify AI’s problems? The limits of a burgeoning methodology for AI problem assessment

Research output: Contribution to journal › Article › Scientific › Peer-review

Abstract

This contribution discusses and compares the human rights-based frameworks, or ‘models’, that authors in the literature use for assessing AI’s problems. In AI research, human rights assessments have emerged as a burgeoning methodology for identifying and giving an account of the technology’s negative implications. However, there has been little critique of the strengths and weaknesses of the formal human rights framework in providing a proper account of AI’s problems. This paper attempts to fill this gap. Because AI’s application involves diverse interests, scholars have identified AI’s implications differently through a human rights lens. The prominent model focuses on privacy, but there is growing recognition of privacy’s limited role in the face of AI’s ubiquity. Additionally, holistic and selective human rights models are used. Drawing on a literature review, this contribution discusses these models and tests them against theories of pragmatic problem identification. The selective model resonates with pragmatic approaches to problem assessment. However, human rights present inherent limits as a problem assessment framework in the face of (new) AI realities and problems. We suggest that using overly formal, closed, and rigid frameworks may impede representing current and new AI issues. Future research should be geared towards exploring more pragmatic methodologies for AI problem assessment.
Original language: English
Pages (from-to): 1139-1162
Number of pages: 24
Journal: International Journal of Human Rights
Volume: 29
Issue number: 6
DOIs
Publication status: Published - 2025
Externally published: Yes

Keywords

  • human rights
  • artificial intelligence
  • pragmatic problem-finding
  • privacy
  • holistic human rights
  • selective human rights

