On the consistency of information filters for lazy learning algorithms

H Brighton*, C Mellish

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

A common practice when filtering a case-base is to employ a filtering scheme that decides which cases to delete, as well as how many cases to delete, such that the storage requirements are minimized and the classification competence is preserved or improved. We introduce an algorithm that rivals the most successful existing algorithm in the average case when filtering 30 classification problems. Neither algorithm consistently outperforms the other, with each performing well on different problems. Consistency over many domains, we argue, is very hard to achieve when deploying a filtering algorithm.
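The abstract describes what a case-base filtering scheme does without giving the algorithm itself. As a rough illustration only, and not the algorithm introduced in the paper, the Python sketch below applies a simple edited nearest-neighbour rule: a case is deleted when the majority label of its k nearest neighbours disagrees with its own label. The function name, parameters, and toy data are invented for this example.

```python
# Illustrative case-base filter, assuming numeric features and integer
# class labels. This is an edited nearest-neighbour style rule, NOT the
# algorithm introduced in the paper.
import numpy as np

def edit_case_base(X, y, k=3):
    """Return a boolean mask marking the cases to keep.

    X : (n, d) array of feature vectors (the case-base)
    y : (n,) array of non-negative integer class labels
    k : number of neighbours consulted for each case
    """
    n = len(X)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Euclidean distance from case i to every other case.
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # a case is not its own neighbour
        neighbours = np.argsort(d)[:k]      # indices of the k nearest cases
        majority = np.bincount(y[neighbours]).argmax()
        # Delete the case if its neighbours would misclassify it.
        if majority != y[i]:
            keep[i] = False
    return keep

if __name__ == "__main__":
    # Toy two-class case-base: 50 cases per Gaussian cluster.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    mask = edit_case_base(X, y, k=3)
    print(f"kept {mask.sum()} of {len(X)} cases")
```

A rule of this kind trims noisy or atypical cases, but, as the abstract argues for filtering schemes in general, how well it trades storage against classification competence can vary considerably from one domain to another.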

Original language: English
Title of host publication: Principles of Data Mining and Knowledge Discovery
Editors: J. M. Zytkow, J. Rauch
Publisher: Springer-Verlag Berlin
Pages: 283-288
Number of pages: 6
ISBN (Print): 3540664904
Publication status: Published - 1999
Externally published: Yes
Event: 3rd European Conference on Principles of Data Mining and Knowledge Discovery in Databases (PKDD 99) - Prague, Czech Republic
Duration: 15 Sep 1999 - 18 Sep 1999

Publication series

Name: Lecture Notes in Artificial Intelligence
Publisher: Springer-Verlag Berlin
Volume: 1704
ISSN (Print): 0302-9743

Conference

Conference: 3rd European Conference on Principles of Data Mining and Knowledge Discovery in Databases (PKDD 99)
Country: Czech Republic
City: Prague
Period: 15/09/99 - 18/09/99

Cite this

Brighton, H., & Mellish, C. (1999). On the consistency of information filters for lazy learning algorithms. In J. M. Zytkow & J. Rauch (Eds.), Principles of Data Mining and Knowledge Discovery (pp. 283-288). (Lecture Notes in Artificial Intelligence; Vol. 1704). Springer-Verlag Berlin.