The joint impact of F-divergences and reference models on the contents of uncertainty sets

Thomas Kruse, Judith C. Schneider, Nikolaus Schweizer

Research output: Contribution to journal › Article › Scientific › peer-review

Abstract

In the presence of model risk, it is well-established to replace classical expected values by worst-case expectations over all models within a fixed radius from a given reference model. This is the "robustness" approach. For the class of F-divergences, we provide a careful assessment of how the interplay between reference model and divergence measure shapes the contents of uncertainty sets. We show that the classical divergences, relative entropy and polynomial divergences, are inadequate for reference models which are moderately heavy-tailed such as lognormal models. Worst cases are either infinitely pessimistic, or they rule out the possibility of fat-tailed "power law" models as plausible alternatives. Moreover, we rule out the existence of a single F-divergence which is appropriate regardless of the reference model. Thus, the reference model should not be neglected when settling on any particular divergence measure in the robustness approach.
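To make the setting concrete, the display below is a minimal sketch of the standard F-divergence robustness setup described in the abstract; the notation (reference model P, alternative model Q, divergence radius η, payoff X) is generic and introduced here for illustration, not taken from the paper itself:

\[
  D_F(Q \,\|\, P) \;=\; \mathbb{E}_P\!\left[F\!\left(\frac{\mathrm{d}Q}{\mathrm{d}P}\right)\right],
  \qquad
  \sup_{\,Q \,:\, D_F(Q \,\|\, P) \,\le\, \eta} \mathbb{E}_Q[X],
\]

where F is convex with F(1) = 0. The choice F(x) = x log x recovers relative entropy (the Kullback-Leibler divergence), while power-type choices of F give the polynomial divergences mentioned in the abstract; the worst-case expectation is taken over all models Q within radius η of the reference model P.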
Original language: English
Pages (from-to): 428-435
Journal: Operations Research
Volume: 67
Issue number: 2
DOI: 10.1287/opre.2018.1807
Publication status: Published - March 2019

Keywords

  • F-divergence
  • Kullback-Leibler divergence
  • heavy tails
  • model risk
  • robustness

Cite this

@article{cce1861221924da49f4dd5ee2f2519a5,
title = "The joint impact of F-divergences and reference models on the contents of uncertainty sets",
abstract = "In the presence of model risk, it is well-established to replace classical expected values by worst-case expectations over all models within a fixed radius from a given reference model. This is the {"}robustness{"} approach. For the class of F-divergences, we provide a careful assessment of how the interplay between reference model and divergence measure shapes the contents of uncertainty sets. We show that the classical divergences, relative entropy and polynomial divergences, are inadequate for reference models which are moderately heavy-tailed such as lognormal models. Worst cases are either infinitely pessimistic, or they rule out the possibility of fat-tailed {"}power law{"} models as plausible alternatives. Moreover, we rule out the existence of a single F-divergence which is appropriate regardless of the reference model. Thus, the reference model should not be neglected when settling on any particular divergence measure in the robustness approach.",
keywords = "F-divergence, Kullback-Leibler divergence, heavy tails, model risk, robustness",
author = "Kruse, Thomas and Schneider, {Judith C.} and Schweizer, Nikolaus",
year = "2019",
month = mar,
doi = "10.1287/opre.2018.1807",
language = "English",
volume = "67",
pages = "428--435",
journal = "Operations Research",
issn = "0030-364X",
publisher = "Institute for Operations Research and the Management Sciences (INFORMS)",
number = "2",
}
