Abstract
The curse of dimensionality, which has been widely studied in statistics and machine learning, occurs when additional features cause the size of the feature space to grow so quickly that learning classification rules becomes increasingly difficult. How do people overcome the curse of dimensionality when acquiring real-world categories that have many different features? Here we investigate the possibility that the structure of categories can help. We show that when categories follow a family resemblance structure, people are unaffected by the presence of additional features in learning. However, when categories are based on a single feature, people fall prey to the curse, and having additional irrelevant features hurts performance. We compare these results with the predictions of three different computational models and show that a model with limited computational capacity best captures human performance across almost all of the conditions in both experiments.
| Original language | English |
| --- | --- |
| Article number | 12724 |
| Pages (from-to) | e12724 |
| Number of pages | 25 |
| Journal | Cognitive Science |
| Volume | 43 |
| Issue number | 3 |
| Publication status | Published - Mar 2019 |
Keywords
- Categorization
- Classification
- Category learning
- Curse of dimensionality
- Identification
- Knowledge
- Models
- Supervised learning