Rule of Thumb and Dynamic Programming

M. Lettau, H.F.H.V.S. Uhlig

Research output: Discussion paper



This paper studies the relationships between learning about rules of thumb (represented by classifier systems) and dynamic programming. Building on a result about Markovian stochastic approximation algorithms, we characterize all decision functions that can be asymptotically obtained through classifier system learning, provided the asymptotic ordering of the classifiers is strict. We demonstrate in a robust example that the learnable decision function is in general not unique, not characterized by a strict ordering of the classifiers, and may not coincide with the decision function delivered by the solution to the dynamic programming problem even if that function is attainable. As an illustration we consider the puzzle of excess sensitivity of consumption to transitory income: classifier systems can generate such behavior even if one of the available rules of thumb is the decision function solving the dynamic programming problem, since bad decisions in good times can "feel better" than good decisions in bad times.
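The mechanism in the last sentence can be sketched in a few lines of simulation. The toy model below is purely illustrative and not the paper's actual setup: two classifiers are assumed, one implementing the dynamic-programming decision function and one a consume-everything rule of thumb, with the rule of thumb (by assumption) active in good times and the optimal rule active in bad times. Each classifier's strength is updated by a stochastic-approximation step toward its realized payoff, so the rule of thumb ends up ranked higher simply because good times deliver higher utility.

```python
import random

random.seed(0)

# Hypothetical illustration (not the paper's model): two classifiers
# compete via a strength update of the stochastic-approximation form
#   s <- s + alpha * (payoff - s).

def utility(c):
    """Concave utility of consumption."""
    return c ** 0.5

strength = {"rule_of_thumb": 0.0, "optimal": 0.0}
alpha = 0.05  # step size of the strength update

for t in range(10_000):
    good_times = random.random() < 0.5
    income = 4.0 if good_times else 1.0
    # Assumption for illustration: the rule of thumb happens to be
    # the active classifier in good times, the optimal rule in bad times.
    active = "rule_of_thumb" if good_times else "optimal"
    payoff = utility(income)  # consume current income in both cases
    strength[active] += alpha * (payoff - strength[active])

# Strengths converge to the average payoff earned while active:
# roughly 2.0 for the rule of thumb, 1.0 for the optimal rule.
print(strength)
```

The strengths converge to the mean payoff each classifier earns while active, so the "bad decision in good times" accumulates more strength than the "good decision in bad times" and the learned ordering never selects the dynamic-programming solution.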
Original language: English
Publication status: Published - 1995

Publication series: CentER Discussion Paper


