Manipulative phantoms in the machine: A legal examination of large language model hallucinations on human opinion formation

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Scientific › peer-review

Abstract

This paper investigates the novel implications of Large Language Model (LLM) hallucinations for cognitive liberty, the formation of informed opinions, and the potential for manipulative influence, especially in socio-psychological, academic and politically sensitive contexts. Employing a multidisciplinary methodology, the study integrates legal analysis with an examination of the mechanisms driving LLM hallucinations. The analysis reveals a plausible risk that these hallucinations distort public discourse, influence opinion formation, and propagate misinformation, thereby creating an unprecedented vulnerability in human-computer interaction. The study then assesses existing legal frameworks, such as the EU AI Act, consumer protection law and the right to Freedom of Thought, evaluating their adequacy in addressing the manipulative impact of LLM hallucinations on independent human cognition.
Original language: English
Title of host publication: Privacy and identity management
Subtitle of host publication: Generating futures
Publisher: Springer Nature Link
Pages: 59-77
Number of pages: 19
Volume: 705
ISBN (Electronic): 978-3-031-91054-8
ISBN (Print): 978-3-031-91053-1
DOIs
Publication status: E-pub ahead of print - 22 May 2025

Publication series

Name: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 705)

Keywords

  • LLM Hallucinations
  • Generative AI
  • Freedom of Thought
  • Manipulation
  • Affective Computing
