Cognitive freedom and legal accountability: Rethinking the EU AI act’s theoretical approach to manipulative AI as unacceptable risk

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

This paper examines the profound challenges posed by manipulative artificial intelligence (AI) and critically evaluates the adequacy of the EU AI Act in mitigating these threats. Modern AI technologies possess the capability to influence human cognition and behaviour imperceptibly, thus endangering cognitive freedom, the fundamental right to autonomous thought. Although the EU AI Act classifies manipulative AI as an unacceptable risk and prohibits its deployment, its current framework, characterized by imprecise definitions and regulatory gaps, undermines its efficacy in holding entities accountable and safeguarding individuals. To address these deficiencies, this paper introduces an innovative analytical method that traces the origins of manipulation, enabling a systematic understanding of the harm. Central to this discussion is the expanded concept of cognitive freedom, which transcends conventional notions of thought rights to encompass protection from covert digital influence. Through illustrative case studies, such as the use of psychographic profiling in political campaigns, the paper elucidates how data-driven methodologies can be harnessed to subtly mould public perception and decision-making. The analysis further investigates digital design strategies, including targeted advertising and algorithmic curation, which constrain user autonomy and erode independent judgment. The paper advocates for a restructured EU AI Act that incorporates precise definitions, mandatory transparency and continuous oversight by independent, multidisciplinary bodies. Such enhancements would strengthen the detection and regulation of manipulative AI practices. By embedding cognitive freedom within legal protections and proposing real-time audits and comprehensive ethical assessments, this paper outlines a strategic pathway for preserving cognitive autonomy. This approach aims to mitigate the erosion of mental sovereignty and uphold the essential principles of independent thought and informed decision-making within the rapidly evolving digital landscape.
Original language: English
Article number: e20
Number of pages: 28
Journal: Cambridge Forum on AI: Law and Governance
Volume: 1
DOIs
Publication status: E-pub ahead of print - 16 May 2025

Keywords

  • Freedom of Thought
  • AI ethics
  • Manipulation
  • AI Act
  • Cognitive Freedom
