Abstract
In the presence of model risk, it is a well-established practice to replace classical expected values with worst-case expectations taken over all models within a fixed radius of a given reference model; this is the "robustness" approach. For the class of F-divergences, we provide a careful assessment of how the interplay between the reference model and the divergence measure shapes the contents of uncertainty sets. We show that the classical divergences, relative entropy and polynomial divergences, are inadequate for reference models that are moderately heavy-tailed, such as lognormal models: worst cases are either infinitely pessimistic, or they rule out fat-tailed "power law" models as plausible alternatives. Moreover, we rule out the existence of a single F-divergence that is appropriate regardless of the reference model. The reference model should therefore not be neglected when settling on a particular divergence measure in the robustness approach.
Original language | English |
---|---|
Pages (from-to) | 428-435 |
Journal | Operations Research |
Volume | 67 |
Issue number | 2 |
DOIs | |
Publication status | Published - Mar 2019 |
Keywords
- F-divergence
- Kullback-Leibler divergence
- heavy tails
- model risk
- robustness
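The "infinitely pessimistic" phenomenon for a lognormal reference model can be sketched numerically. The sketch below is my own illustration, not code from the paper: it relies on the well-known dual representation of the relative-entropy worst case, sup over Q with KL(Q||P) ≤ r of E_Q[X] equals inf over θ > 0 of (r + log E_P[e^{θX}])/θ, which is finite only if the reference model P has a finite exponential moment. A lognormal reference has E_P[e^{θX}] = ∞ for every θ > 0, so the truncated exponential moments below grow without bound, whereas for a light-tailed exponential reference they stabilise at 1/(1−θ).

```python
# Numerical sketch (illustration only, assuming the standard KL dual bound):
# truncated exponential moments int_0^T e^(theta x) p(x) dx for a
# moderately heavy-tailed (lognormal) vs light-tailed (exponential)
# reference model P.
import numpy as np
from scipy import integrate
from scipy.stats import lognorm, expon

theta = 0.1
P_heavy = lognorm(s=1.0)   # standard lognormal reference (moderately heavy-tailed)
P_light = expon()          # unit-rate exponential reference (light-tailed)

def truncated_exp_moments(P, T_grid):
    """Running values of int_0^T e^(theta x) p(x) dx over a grid of cutoffs T."""
    totals, total, lo = [], 0.0, 0.0
    for hi in T_grid:
        piece, _ = integrate.quad(lambda x: np.exp(theta * x) * P.pdf(x),
                                  lo, hi, limit=200)
        total += piece
        totals.append(total)
        lo = hi
    return totals

grid = [50, 100, 200, 400, 800]
heavy = truncated_exp_moments(P_heavy, grid)
light = truncated_exp_moments(P_light, grid)
for T, h, l in zip(grid, heavy, light):
    print(f"T={T:>4}: lognormal {h:.3e}   exponential {l:.6f}")
# The lognormal column explodes as T grows, so the dual bound -- and hence
# the KL worst-case expectation -- is +infinity for any radius r > 0; the
# exponential column converges to 1/(1 - theta).
```

Under a polynomial divergence the situation inverts: the worst case is finite, but any power-law alternative sits at infinite divergence from the lognormal reference and is excluded from every finite-radius ball, which is the dichotomy the abstract describes.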