Concept Learning in a Probabilistic Language of Thought. How is it possible and what does it presuppose?

Research output: Contribution to journal › Comment/Letter to the editor › Scientific › peer-review

Abstract

Where does a probabilistic language of thought (PLoT) come from? How can we learn new concepts through probabilistic inferences operating on a PLoT? Here, I explore these questions, sketching a traditional circularity objection to the LoT and canvassing various approaches to addressing it. I conclude that PLoT-based cognitive architectures can support genuine concept learning, but it is currently unclear whether they enjoy more explanatory breadth with respect to concept learning than alternative architectures that do not posit any LoT.
Original language: English
Article number: 271
Journal: Behavioral and Brain Sciences
Volume: 46
DOIs
Publication status: Published - 2023

