Abstract
The basic nearest neighbour classifier suffers from the indiscriminate storage of all presented training instances. With a large database of instances, classification response time can be slow; when noisy instances are present, classification accuracy can suffer. Drawing on the large body of relevant work carried out over the past 30 years, we review the principal approaches to solving these problems. By deleting instances, both problems can be alleviated, but the deletion criterion used is typically assumed to be all-encompassing and effective over many domains. We argue against this position and introduce an algorithm that rivals the most successful existing algorithm. When evaluated on 30 different problems, neither algorithm consistently outperforms the other: consistency across domains is very hard to achieve. To achieve the best results, we need to develop mechanisms that provide insights into the structure of class definitions. We discuss the possibility of such mechanisms and propose some initial measures that could be useful for the data miner.
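To make the idea of instance deletion concrete, the sketch below shows one well-known selection criterion, Wilson-style editing (remove instances misclassified by their own k nearest neighbours). It is a minimal illustration of the general approach discussed in the abstract, not the algorithm proposed in the paper; the toy dataset and parameter values are assumptions for demonstration only.

```python
# Minimal sketch of instance deletion for a nearest-neighbour classifier,
# using Wilson-style editing as one example of a deletion criterion.
# The dataset and k value are illustrative, not taken from the paper.
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training instances."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

def wilson_edit(X, y, k=3):
    """Return a boolean mask keeping instances that agree with their neighbours.

    Each instance is classified by its k nearest neighbours excluding itself;
    instances that disagree with the local majority (often noisy or borderline
    cases) are deleted, reducing storage and classification time.
    """
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        others = np.arange(len(X)) != i
        pred = knn_predict(X[others], y[others], X[i], k=k)
        keep[i] = (pred == y[i])
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two overlapping Gaussian clusters as a toy two-class problem.
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    keep = wilson_edit(X, y, k=3)
    print(f"kept {keep.sum()} of {len(X)} instances")
```

The retained subset can then be used as the stored instance base for subsequent nearest neighbour classification; more aggressive criteria trade further storage reduction against the risk of discarding informative border instances.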
Original language | English |
---|---|
Pages (from-to) | 153-172 |
Number of pages | 20 |
Journal | Data Mining and Knowledge Discovery |
Volume | 6 |
Issue number | 2 |
DOIs | |
Publication status | Published - Apr 2002 |
Externally published | Yes |
Keywords
- instance-based learning
- instance selection
- forgetting
- pruning
- nearest-neighbor rule
- classification