Abstract
This contribution discusses and compares the human rights-based frameworks, or 'models', that authors in the literature use to assess AI's problems. In AI research, human rights assessment has become a burgeoning methodology for identifying and accounting for the technology's negative implications. However, there has been little critique of the strengths and weaknesses of the formal human rights framework in properly accounting for AI's problems. This paper attempts to fill this gap. Because AI's application involves diverse interests, scholars have identified AI's implications through a human rights lens in different ways. The most prominent model focuses on privacy, but there is growing recognition of privacy's limited role in the face of AI's ubiquity. In addition, holistic and selective human rights models are used. Drawing on a literature review, this contribution discusses these models and tests them against theories of pragmatic problem identification. The selective model resonates with pragmatic approaches to problem assessment. However, human rights present inherent limits as a problem-assessment framework in the face of (new) AI realities and problems. We suggest that overly formal, closed, and rigid frameworks may impede the representation of current and new AI issues. Future research should be geared towards exploring more pragmatic methodologies for AI problem assessment.
| Original language | English |
|---|---|
| Pages (from-to) | 1139-1162 |
| Number of pages | 24 |
| Journal | International Journal of Human Rights |
| Volume | 29 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 2025 |
| Externally published | Yes |
Keywords
- human rights
- artificial intelligence
- pragmatic problem-finding
- privacy
- holistic human rights
- selective human rights