On the consistency of information filters for lazy learning algorithms

H Brighton*, C Mellish

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution (peer-reviewed)

46 Citations (Scopus)

Abstract

A common practice when filtering a case-base is to employ a filtering scheme that decides which cases to delete, as well as how many cases to delete, such that the storage requirements are minimized and the classification competence is preserved or improved. We introduce an algorithm that rivals the most successful existing algorithm in the average case when filtering 30 classification problems. Neither algorithm consistently outperforms the other, with each performing well on different problems. Consistency over many domains, we argue, is very hard to achieve when deploying a filtering algorithm.
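To make the idea of a case-base filter concrete, here is a minimal sketch of one classic scheme of this kind, Wilson-style editing: delete every case that the majority vote of its k nearest neighbours misclassifies, so noisy and borderline cases are dropped while the case-base's classification competence is (ideally) preserved. This is a generic illustration of the family of filters the abstract discusses, not the algorithm introduced in this paper; the function name and the (features, label) case representation are assumptions for the example.

```python
from collections import Counter
import math

def edit_case_base(cases, k=3):
    """Wilson-style editing filter (illustrative, not the paper's algorithm).

    Deletes every case that the majority vote of its k nearest
    neighbours in the full case-base misclassifies.
    `cases` is a list of (features, label) pairs.
    """
    kept = []
    for i, (x, label) in enumerate(cases):
        # k nearest neighbours by Euclidean distance, excluding the case itself.
        neighbours = sorted(
            (c for j, c in enumerate(cases) if j != i),
            key=lambda c: math.dist(c[0], x),
        )[:k]
        majority = Counter(lbl for _, lbl in neighbours).most_common(1)[0][0]
        if majority == label:
            kept.append((x, label))  # consistent with its neighbourhood: keep
        # otherwise the case is treated as noise/borderline and deleted
    return kept
```

Note that a scheme like this decides both *which* cases to delete and, implicitly, *how many*: the deletion count is whatever falls out of the local consistency test, which is one reason consistent behaviour across many domains is hard to guarantee.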

Original language: English
Title of host publication: Principles of Data Mining and Knowledge Discovery
Editors: J. M. Zytkow, J. Rauch
Publisher: Springer-Verlag Berlin
Pages: 283-288
Number of pages: 6
ISBN (Print): 3540664904
Publication status: Published - 1999
Externally published: Yes
Event: 3rd European Conference on Principles of Data Mining and Knowledge Discovery in Databases (PKDD 99), Prague, Czech Republic
Duration: 15 Sept 1999 - 18 Sept 1999

Publication series

Name: Lecture Notes in Artificial Intelligence
Publisher: Springer-Verlag Berlin
Volume: 1704
ISSN (Print): 0302-9743

