### Abstract

Original language | English |
---|---|
Place of Publication | Tilburg |
Publisher | Operations research |
Number of pages | 18 |
Volume | 2003-101 |
Publication status | Published - 2003 |

### Publication series

Name | CentER Discussion Paper |
---|---|
Volume | 2003-101 |

### Keywords

- estimation
- interpolation
- polynomials
- nonlinear programming

### Cite this

*Gradient Estimation using Lagrange Interpolation Polynomials*. (CentER Discussion Paper; Vol. 2003-101). Tilburg: Operations research.


**Gradient Estimation using Lagrange Interpolation Polynomials.** / Hamers, H.J.M.; Brekelmans, R.C.M.; Driessen, L.; den Hertog, D.

Research output: Working paper › Discussion paper › Other research output

TY - UNPB

T1 - Gradient Estimation using Lagrange Interpolation Polynomials

AU - Hamers, H.J.M.

AU - Brekelmans, R.C.M.

AU - Driessen, L.

AU - den Hertog, D.

N1 - Subsequently published in the Journal of Optimization Theory & Applications, 2008. Pagination: 18

PY - 2003

Y1 - 2003

N2 - In this paper we use Lagrange interpolation polynomials to obtain good gradient estimates. This is important, for example, for nonlinear programming solvers. As an error criterion we take the mean squared error. This error can be split into a deterministic and a stochastic error. We analyze these errors using (N times replicated) Lagrange interpolation polynomials. We show that the mean squared error is of order N^(-1+1/(2d)) if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. As a result, the order of the mean squared error converges to N^(-1) as the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. Finally, we consider the case of a fixed budget of evaluations. For this situation we provide an optimal division between the number of replicates and the number of evaluations in a replicate.

AB - In this paper we use Lagrange interpolation polynomials to obtain good gradient estimates. This is important, for example, for nonlinear programming solvers. As an error criterion we take the mean squared error. This error can be split into a deterministic and a stochastic error. We analyze these errors using (N times replicated) Lagrange interpolation polynomials. We show that the mean squared error is of order N^(-1+1/(2d)) if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. As a result, the order of the mean squared error converges to N^(-1) as the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. Finally, we consider the case of a fixed budget of evaluations. For this situation we provide an optimal division between the number of replicates and the number of evaluations in a replicate.

KW - estimation

KW - interpolation

KW - polynomials

KW - nonlinear programming

M3 - Discussion paper

VL - 2003-101

T3 - CentER Discussion Paper

BT - Gradient Estimation using Lagrange Interpolation Polynomials

PB - Operations research

CY - Tilburg

ER -
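The replicated Lagrange estimation procedure described in the abstract can be sketched in a few lines. The following is a minimal one-dimensional illustration, not the authors' implementation: it fits the unique degree-(2d-1) interpolation polynomial through 2d symmetric points around x0 (i.e. 2d evaluations per replicate), differentiates it at x0, and averages over N replicates. All function and parameter names here are invented for this sketch.

```python
import numpy as np

def lagrange_gradient_estimate(f, x0, h=0.1, d=2, n_replicates=100):
    """Estimate f'(x0) from (possibly noisy) evaluations of f.

    Per replicate: evaluate f at 2d symmetric offsets around x0, fit the
    unique degree-(2d-1) Lagrange interpolation polynomial through those
    points, and differentiate it at x0. The final estimate averages the
    n_replicates replicate estimates.
    """
    # 2d symmetric evaluation points around x0 (x0 itself is excluded)
    offsets = h * np.concatenate([np.arange(-d, 0), np.arange(1, d + 1)])
    estimates = []
    for _ in range(n_replicates):
        ys = np.array([f(x0 + t) for t in offsets])
        # 2d points and degree 2d-1 make this an exact interpolation fit,
        # so polyfit recovers the Lagrange interpolation polynomial.
        coeffs = np.polyfit(offsets, ys, deg=2 * d - 1)
        # Derivative of the interpolating polynomial at offset 0 (= x0)
        estimates.append(np.polyval(np.polyder(coeffs), 0.0))
    return float(np.mean(estimates))

# Example with noisy evaluations: averaging the replicates reduces the
# stochastic part of the mean squared error, in line with the abstract.
rng = np.random.default_rng(0)
noisy_sin = lambda x: np.sin(x) + 0.01 * rng.standard_normal()
approx_cos0 = lagrange_gradient_estimate(noisy_sin, 0.0, n_replicates=200)
```

For a noiseless polynomial of degree at most 2d-1 the interpolation is exact, so the derivative estimate is exact up to floating-point error; for noisy evaluations the deterministic (interpolation) error and the stochastic (noise) error trade off through h, d, and N, which is the balance the paper's N^(-1+1/(2d)) rate quantifies.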