Gradient estimation using Lagrange interpolation polynomials

Research output: Contribution to journal › Article › Scientific › peer-review


Abstract

We use Lagrange interpolation polynomials to obtain accurate gradient estimates. This is important for, e.g., nonlinear programming solvers. As the error criterion we take the mean squared error, which can be split into a deterministic error and a stochastic error. We analyze these errors using N-times replicated Lagrange interpolation polynomials. We show that the mean squared error is of order N^{-1+1/(2d)} if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. Consequently, the order of the mean squared error converges to N^{-1} as the number of evaluation points increases to infinity. Moreover, we show that the approach is also useful for deterministic functions whose evaluations involve numerical errors. We also provide the optimal division between the number of grid points and the number of replicates when the total number of evaluations is fixed. Further, we show that the derivative estimates become more robust as the number of evaluation points increases. Finally, test results demonstrate the practical use of the proposed method.
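A back-of-the-envelope reconstruction of where the rate comes from (only the final rate is stated in the abstract; the intermediate orders are assumptions): with step size h and 2d evaluation points per replicate, the degree-(2d-1) interpolant suggests a squared deterministic error of order h^{4d-2}, while averaging N noisy replicates suggests a stochastic error of order \sigma^2/(N h^2), so

    MSE(h) \approx C_1 h^{4d-2} + C_2 \sigma^2 / (N h^2).

Minimizing over h gives h \propto N^{-1/(4d)} and MSE \propto N^{-1+1/(2d)}, matching the stated order. The scheme is also easy to prototype: fit an interpolating polynomial through a small symmetric grid of noisy function evaluations, differentiate it at the point of interest, and average over N independent replicates so the stochastic error decays. The one-dimensional sketch below is an illustration under assumptions, not the paper's exact construction: the function name lagrange_gradient_1d, the equispaced symmetric node placement, and the step size h are all choices made for the example.

```python
import numpy as np

def lagrange_gradient_1d(f, x0, h, n_points, n_replicates):
    """Estimate f'(x0) from N replicated Lagrange interpolations.

    Hypothetical illustration of the replicated scheme in the abstract:
    each replicate fits the degree (n_points - 1) interpolating
    polynomial through n_points noisy evaluations of f on a symmetric
    grid around x0 and differentiates it at x0; the per-replicate
    estimates are then averaged to shrink the stochastic error.
    """
    # Symmetric, equispaced offsets around x0 (an assumed placement).
    offsets = (np.arange(n_points) - (n_points - 1) / 2) * h
    xs = x0 + offsets
    estimates = np.empty(n_replicates)
    for r in range(n_replicates):
        ys = np.array([f(x) for x in xs])  # fresh (noisy) evaluations
        # polyfit with deg = n_points - 1 fits the data exactly, so this
        # is the Lagrange interpolation polynomial in monomial form.
        poly = np.poly1d(np.polyfit(offsets, ys, deg=n_points - 1))
        estimates[r] = poly.deriv()(0.0)  # derivative at offset 0, i.e. x0
    return float(estimates.mean())

# Usage: a noisy quadratic; the true derivative at x0 = 1.0 is 2.0.
rng = np.random.default_rng(0)
noisy_f = lambda x: x**2 + rng.normal(scale=1e-3)
print(lagrange_gradient_1d(noisy_f, x0=1.0, h=0.1, n_points=4, n_replicates=100))
```

In this toy run the cubic interpolant reproduces the quadratic exactly, so only the averaged noise remains; more generally, increasing the number of nodes trades a smaller deterministic error for a noisier per-replicate derivative, which is the balance behind the N^{-1+1/(2d)} rate.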
Original language: English
Pages (from-to): 341-357
Journal: Journal of Optimization Theory and Applications
Volume: 136
Issue number: 3
Publication status: Published - 2008

Cite this

@article{f089ae6ad7c24863a52da909888c601a,
title = "Gradient estimation using Lagrange interpolation polynomials",
abstract = "We use Lagrange interpolation polynomials to obtain good gradient estimations. This is e.g. important for nonlinear programming solvers. As an error criterion, we take the mean squared error, which can be split up into a deterministic error and a stochastic error. We analyze these errors using N-times replicated Lagrange interpolation polynomials. We show that the mean squared error is of order N−1+ 1/2d if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. As a result, the order of the mean squared error converges to N−1 if the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. We provide also an optimal division between the number of gridpoints and replicates in case the number of evaluations is fixed. Further, it is shown that the estimation of the derivatives is more robust when the number of evaluation points is increased. Finally, test results show the practical use of the proposed method. ",
author = "R.C.M. Brekelmans and L. Driessen and H.J.M. Hamers and {den Hertog}, D.",
note = "Appeared earlier as CentER DP 2003-101",
year = "2008",
language = "English",
volume = "136",
pages = "341--357",
journal = "Journal of Optimization Theory and Applications",
issn = "0022-3239",
publisher = "SPRINGER/PLENUM PUBLISHERS",
number = "3",

}

Brekelmans, R.C.M., Driessen, L., Hamers, H.J.M., & den Hertog, D. (2008). Gradient estimation using Lagrange interpolation polynomials. Journal of Optimization Theory and Applications, 136(3), 341-357.

