Abstract

In the presence of model risk, it is well-established to replace classical expected values by worst-case expectations over all models within a fixed radius from a given reference model. This is the "robustness" approach. For the class of F-divergences, we provide a careful assessment of how the interplay between reference model and divergence measure shapes the contents of uncertainty sets. We show that the classical divergences, relative entropy and polynomial divergences, are inadequate for reference models which are moderately heavy-tailed such as lognormal models. Worst cases are either infinitely pessimistic, or they rule out the possibility of fat-tailed "power law" models as plausible alternatives. Moreover, we rule out the existence of a single F-divergence which is appropriate regardless of the reference model. Thus, the reference model should not be neglected when settling on any particular divergence measure in the robustness approach.
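For orientation, these are the standard definitions behind the setup described in the abstract; the notation is illustrative and not necessarily the paper's own.

```latex
% Reference model P, alternative model Q, F convex with F(1) = 0.
% F-divergence of Q from P:
D_F(Q \,\|\, P) \;=\; \mathbb{E}_P\!\left[F\!\left(\frac{dQ}{dP}\right)\right]

% Uncertainty set of radius eta around the reference model:
\mathcal{B}_\eta(P) \;=\; \bigl\{\, Q \;:\; D_F(Q \,\|\, P) \le \eta \,\bigr\}

% Robust (worst-case) value of a payoff X:
\sup_{Q \in \mathcal{B}_\eta(P)} \mathbb{E}_Q[X]

% Relative entropy corresponds to F(x) = x \log x; one common parametrization
% of the polynomial divergences is F(x) = (x^\alpha - 1)/(\alpha(\alpha - 1)),
% \alpha \notin \{0, 1\}.
```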
| Original language | English |
| --- | --- |
| Pages (from-to) | 428-435 |
| Journal | Operations Research |
| Volume | 67 |
| Issue number | 2 |
| DOIs | https://doi.org/10.1287/opre.2018.1807 |
| Publication status | Published - Mar 2019 |
Keywords
- F-divergence
- Kullback-Leibler divergence
- heavy tails
- model risk
- robustness
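The two failure modes named in the abstract can be illustrated numerically. The sketch below is my own illustration under standard assumptions (the convex-duality bound for Kullback-Leibler uncertainty sets and a chi-square-type polynomial divergence), not code from the paper. It shows (i) that the KL dual bound requires a finite moment generating function of the reference model, which a lognormal model lacks, so the KL worst case is infinitely pessimistic; and (ii) that the polynomial divergence of a Pareto ("power law") alternative from a lognormal reference is infinite, so such alternatives are excluded from every finite-radius uncertainty set.

```python
import math

# (i) KL uncertainty sets: the standard dual bound is
#       sup_{KL(Q||P) <= eta} E_Q[X] = inf_{theta > 0} (eta + log E_P[exp(theta X)]) / theta,
#     which is finite only if E_P[exp(theta X)] < infinity for some theta > 0.
#     We inspect log(exp(theta x) * p(x)), the log-integrand of that expectation.

def lognormal_log_pdf(x, mu=0.0, sigma=1.0):
    return -math.log(x * sigma * math.sqrt(2.0 * math.pi)) \
           - (math.log(x) - mu) ** 2 / (2.0 * sigma ** 2)

def exponential_log_pdf(x, rate=1.0):
    return math.log(rate) - rate * x

theta = 0.5
print("log-integrand of E_P[exp(theta X)], theta = 0.5")
for x in (1e1, 1e2, 1e3, 1e4):
    lognormal = theta * x + lognormal_log_pdf(x)       # grows without bound -> integral diverges
    exponential = theta * x + exponential_log_pdf(x)   # decays (theta < rate) -> integral is finite
    print(f"  x = {x:>8.0f}   lognormal: {lognormal:12.1f}   exponential: {exponential:10.1f}")

# (ii) Polynomial divergence of order alpha = 2 (chi-square type):
#       D(Q||P) = E_P[(dQ/dP)^2] - 1 = integral of q(x)^2 / p(x) dx - 1.
#     For a Pareto alternative q and a lognormal reference p, the log-integrand
#     log(q(x)^2 / p(x)) eventually grows like (log x)^2 / 2, so the integral is infinite.

def pareto_log_pdf(x, gamma=2.0, x_min=1.0):
    return math.log(gamma) + gamma * math.log(x_min) - (gamma + 1.0) * math.log(x)

print("log-integrand of E_P[(dQ/dP)^2], Q Pareto, P lognormal")
for u in (10.0, 20.0, 30.0, 40.0):
    x = math.exp(u)
    value = 2.0 * pareto_log_pdf(x) - lognormal_log_pdf(x)  # increases without bound
    print(f"  x = exp({u:>4.0f})   log-integrand: {value:10.1f}")
```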
Cite this
The joint impact of F-divergences and reference models on the contents of uncertainty sets. / Kruse, Thomas; Schneider, Judith C.; Schweizer, Nikolaus.
In: Operations Research, Vol. 67, No. 2, 03.2019, p. 428-435.
Research output: Contribution to journal › Article › Scientific › peer-review
TY - JOUR
T1 - The joint impact of F-divergences and reference models on the contents of uncertainty sets
AU - Kruse, Thomas
AU - Schneider, Judith C.
AU - Schweizer, Nikolaus
PY - 2019/3
Y1 - 2019/3
N2 - In the presence of model risk, it is well-established to replace classical expected values by worst-case expectations over all models within a fixed radius from a given reference model. This is the "robustness" approach. For the class of F-divergences, we provide a careful assessment of how the interplay between reference model and divergence measure shapes the contents of uncertainty sets. We show that the classical divergences, relative entropy and polynomial divergences, are inadequate for reference models which are moderately heavy-tailed such as lognormal models. Worst cases are either infinitely pessimistic, or they rule out the possibility of fat-tailed "power law" models as plausible alternatives. Moreover, we rule out the existence of a single F-divergence which is appropriate regardless of the reference model. Thus, the reference model should not be neglected when settling on any particular divergence measure in the robustness approach.
AB - In the presence of model risk, it is well-established to replace classical expected values by worst-case expectations over all models within a fixed radius from a given reference model. This is the "robustness" approach. For the class of F-divergences, we provide a careful assessment of how the interplay between reference model and divergence measure shapes the contents of uncertainty sets. We show that the classical divergences, relative entropy and polynomial divergences, are inadequate for reference models which are moderately heavy-tailed such as lognormal models. Worst cases are either infinitely pessimistic, or they rule out the possibility of fat-tailed "power law" models as plausible alternatives. Moreover, we rule out the existence of a single F-divergence which is appropriate regardless of the reference model. Thus, the reference model should not be neglected when settling on any particular divergence measure in the robustness approach.
KW - F-divergence
KW - Kullback-Leibler divergence
KW - heavy tails
KW - model risk
KW - robustness
U2 - 10.1287/opre.2018.1807
DO - 10.1287/opre.2018.1807
M3 - Article
VL - 67
SP - 428
EP - 435
JO - Operations Research
JF - Operations Research
SN - 0030-364X
IS - 2
ER -