Dorsal anterior cingulate-brainstem ensemble as a reinforcement meta-learner

Massimo Silvetti, Eliana Vassena, Elger Abrahamse, Tom Verguts

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

Optimal decision-making is based on integrating information from several dimensions of decisional space (e.g., reward expectation, cost estimation, effort exertion). Despite considerable empirical and theoretical efforts, the computational and neural bases of such multidimensional integration have remained largely elusive. Here we propose that the current theoretical stalemate may be broken by considering the computational properties of a cortical-subcortical circuit involving the dorsal anterior cingulate cortex (dACC) and the brainstem neuromodulatory nuclei: the ventral tegmental area (VTA) and locus coeruleus (LC). From this perspective, the dACC optimizes decisions about stimuli and actions, and, using the same computational machinery, it also modulates cortical functions (meta-learning) via neuromodulatory control (VTA and LC). We implemented this theory in a novel neuro-computational model: the Reinforcement Meta Learner (RML). We outline how the RML captures critical empirical findings from an unprecedented range of theoretical domains, and parsimoniously integrates various previous proposals on dACC functioning.
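The abstract's central idea, that one circuit both learns from prediction errors and uses those same errors to modulate its own learning, can be illustrated with a toy sketch. The code below is an illustrative assumption, not the published RML equations: a two-armed bandit agent whose value updates stand in for dACC/VTA prediction-error learning, and whose error-driven learning-rate modulation stands in for LC neuromodulatory control.

```python
import random

def run_meta_learner(n_trials=2000, seed=0):
    """Toy bandit agent with a meta-learning layer (illustrative only).

    The value updates are a stand-in for dACC/VTA prediction-error
    learning; adapting the learning rate from the running magnitude of
    those errors is a stand-in for LC-mediated modulation. Names and
    dynamics are assumptions for illustration, not the RML itself.
    """
    rng = random.Random(seed)
    values = [0.0, 0.0]      # estimated reward for each of two actions
    lr = 0.1                 # learning rate, modulated by the meta-layer
    avg_surprise = 0.0       # running mean of |prediction error|
    rewards_earned = 0.0
    for t in range(n_trials):
        # Reward probabilities reverse halfway through, creating
        # volatility that the meta-layer must adapt to.
        p = [0.8, 0.2] if t < n_trials // 2 else [0.2, 0.8]
        # Epsilon-greedy action selection.
        if rng.random() < 0.1:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda i: values[i])
        r = 1.0 if rng.random() < p[a] else 0.0
        rewards_earned += r
        delta = r - values[a]                   # reward prediction error
        avg_surprise += 0.05 * (abs(delta) - avg_surprise)
        # Meta-learning step: sustained surprise boosts plasticity.
        lr = 0.05 + 0.5 * avg_surprise
        values[a] += lr * delta
    return rewards_earned / n_trials

print(run_meta_learner())
```

After the mid-run reversal, the accumulated surprise raises the learning rate, so the agent relearns the better action faster than a fixed-rate learner would; its average reward stays well above the 0.5 chance level.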

Original language: English
Pages (from-to): e1006370
Journal: PLOS Computational Biology
Volume: 14
Issue number: 8
DOIs
Publication status: Published - Aug 2018
Externally published: Yes

Keywords

  • Animals
  • Brain Mapping/methods
  • Brain Stem/physiology
  • Cognition/physiology
  • Computer Simulation
  • Decision Making/physiology
  • Gyrus Cinguli/physiology
  • Humans
  • Learning/physiology
  • Locus Coeruleus/physiology
  • Models, Theoretical
  • Reinforcement, Psychology
  • Reward
  • Ventral Tegmental Area/physiology
