Automatic error localisation for categorical, continuous and integer data

Research output: Contribution to journal › Article › Scientific › peer-review


Data collected by statistical offices generally contain errors, which have to be corrected before reliable data can be published. This correction process is referred to as statistical data editing. At statistical offices, certain rules, so-called edits, are often used during the editing process to determine whether a record is consistent or not. Inconsistent records are considered to contain errors, while consistent records are considered error-free. In this article we focus on automatic error localisation based on the Fellegi-Holt paradigm, which says that the data should be made to satisfy all edits by changing the smallest possible number of fields. Adoption of this paradigm leads to a mathematical optimisation problem. We propose an algorithm to solve this optimisation problem for a mix of categorical, continuous and integer-valued data. We also propose a heuristic procedure based on the exact algorithm. For five realistic data sets involving only integer-valued variables we evaluate the performance of this heuristic procedure.
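The Fellegi-Holt optimisation problem described in the abstract can be illustrated with a minimal brute-force sketch. The record, the field names, the edits and the value domain below are all hypothetical examples, not taken from the paper; the exact and heuristic algorithms the article proposes are far more sophisticated than this exhaustive search.

```python
from itertools import combinations, product

# Hypothetical toy record with one violated edit (100 - 60 != 30).
record = {"turnover": 100, "costs": 60, "profit": 30}

# Edits: each rule returns True when the record is consistent with it.
edits = [
    lambda r: r["turnover"] - r["costs"] == r["profit"],  # balance edit
    lambda r: r["turnover"] >= 0 and r["costs"] >= 0,     # non-negativity
]

def consistent(r):
    return all(e(r) for e in edits)

def fellegi_holt(record, domain=range(0, 201)):
    """Find a smallest set of fields whose values can be changed so that
    all edits are satisfied, by trying 0 changes, then 1, then 2, ..."""
    fields = list(record)
    for k in range(len(fields) + 1):
        for subset in combinations(fields, k):
            for values in product(domain, repeat=k):
                candidate = dict(record, **dict(zip(subset, values)))
                if consistent(candidate):
                    return set(subset), candidate
    return None

changed, fixed = fellegi_holt(record)
```

Because the search tries smaller subsets first, the first consistent candidate it returns changes a minimal number of fields, which is exactly the Fellegi-Holt criterion; here a single field suffices to restore consistency.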
Original language: English
Pages (from-to): 57-99
Journal: Statistics and Operations Research Transactions
Issue number: 1
Publication status: Published - 2005
Externally published: Yes


