Forecasting earnings using k-nearest neighbor classification

Peter Easton, Martin Kapons, S. Monahan, Harm Schütt, Eric H. Weisbrod

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

We use a simple k-nearest neighbors (k-NN) model to forecast a subject firm’s annual earnings by matching its recent earnings history to earnings histories of comparable firms, and then extrapolating the forecast from the comparable firms’ lead earnings. Out-of-sample forecasts generated by our model are more accurate than forecasts generated by the random walk; more complicated k-NN models; the matching approach developed by Blouin, Core, and Guay (2010); and popular regression models. These results are robust. Our model’s superiority holds for different error metrics, for firms that are followed by analysts and firms that are not, and for different forecast horizons. Our model also generates a novel ex ante indicator of forecast inaccuracy. This indicator, which equals the interquartile range of the comparable firms’ lead earnings, is reliable and useful. It predicts forecast accuracy and it identifies situations when our forecasts are strong (weak) predictors of future stock returns.
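As an illustration of the approach described in the abstract, the following is a minimal sketch in Python; it is not the authors' implementation. The specific choices here are assumptions for illustration only: a three-year earnings history, Euclidean distance, k = 80 comparable firms, and the median of the comparables' lead earnings as the point forecast. The ex ante inaccuracy indicator is the interquartile range of the comparables' lead earnings, as stated in the abstract.

```python
# Sketch of a k-NN earnings forecast: match a subject firm's recent earnings
# history to comparable firms' histories, then forecast from the comparables'
# lead (year t+1) earnings. Hypothetical parameters; scaling, winsorization,
# and the choice of k follow the paper, not this sketch.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_earnings_forecast(subject_history, comp_histories, comp_lead_earnings, k=80):
    """Forecast next-year earnings for one subject firm.

    subject_history    : shape (h,)   -- subject firm's recent earnings history
    comp_histories     : shape (n, h) -- comparable firms' earnings histories
    comp_lead_earnings : shape (n,)   -- comparable firms' year t+1 earnings
    """
    nn = NearestNeighbors(n_neighbors=k).fit(comp_histories)
    _, idx = nn.kneighbors(subject_history.reshape(1, -1))
    neighbor_leads = comp_lead_earnings[idx[0]]

    forecast = np.median(neighbor_leads)              # point forecast (assumed: median)
    q75, q25 = np.percentile(neighbor_leads, [75, 25])
    uncertainty = q75 - q25                           # ex ante inaccuracy indicator (IQR)
    return forecast, uncertainty

# Usage with simulated data (earnings scaled by total assets, for example)
rng = np.random.default_rng(0)
comp_histories = rng.normal(0.05, 0.08, size=(5000, 3))        # years t-2 .. t
comp_leads = comp_histories[:, -1] + rng.normal(0, 0.05, 5000)  # year t+1
subject = np.array([0.02, 0.04, 0.06])
print(knn_earnings_forecast(subject, comp_histories, comp_leads))
```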
Original language: English
Pages (from-to): 115-140
Journal: Accounting Review
Volume: 99
Issue number: 3
Publication status: Published - May 2024

Keywords

  • earnings
  • forecasting
  • machine learning
