Gradient Estimation using Lagrange Interpolation Polynomials

H.J.M. Hamers, R.C.M. Brekelmans, L. Driessen, D. den Hertog

Research output: Working paper › Discussion paper › Other research output


Abstract

In this paper we use Lagrange interpolation polynomials to obtain good gradient estimates. This is important, for example, for nonlinear programming solvers. As an error criterion we take the mean squared error, which can be split into a deterministic error and a stochastic error. We analyze these errors using (N times replicated) Lagrange interpolation polynomials. We show that the mean squared error is of order N^{-1+1/(2d)} if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. As a result, the order of the mean squared error converges to N^{-1} as the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. Finally, we consider the case of a fixed budget of evaluations, and for this situation we provide an optimal division between the number of replicates and the number of evaluations per replicate.
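The paper's own estimator and error constants are not reproduced here. As a rough illustration of the replicated scheme described in the abstract, the sketch below assumes the simplest setting in which each replicate evaluates a noisy function f at the 2d points x ± h·e_i; the derivative at x of the Lagrange interpolation polynomial through each coordinate pair then reduces to a central-difference quotient, and averaging over N replicates reduces the stochastic error. The function name lagrange_gradient_estimate, the step size h, and the example function are illustrative assumptions, not taken from the paper.

import numpy as np

def lagrange_gradient_estimate(f, x, h=1e-2, n_replicates=10):
    """Replicated gradient estimate for a noisy function f: R^d -> R.

    Each replicate uses 2d evaluations (x +/- h*e_i per coordinate).
    The derivative at x of the Lagrange interpolation polynomial through
    the two points of a coordinate pair equals the central-difference
    quotient; the N replicates are averaged to damp the stochastic error.
    """
    x = np.asarray(x, dtype=float)
    d = x.size
    estimates = np.empty((n_replicates, d))
    for r in range(n_replicates):
        for i in range(d):
            step = np.zeros(d)
            step[i] = h
            # Two evaluations per coordinate -> 2d evaluations per replicate.
            estimates[r, i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return estimates.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noise = 1e-3

    def f(z):
        # Noisy quadratic; the true gradient at x is 2*x.
        return float(np.sum(z ** 2)) + noise * rng.standard_normal()

    x0 = np.array([1.0, -0.5, 2.0])
    print(lagrange_gradient_estimate(f, x0, h=0.05, n_replicates=50))

In this hedged sketch, increasing n_replicates plays the role of N in the abstract: the deterministic (interpolation) error is fixed by h, while the stochastic error shrinks as more replicates are averaged.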
Original language: English
Place of publication: Tilburg
Publisher: Operations research
Number of pages: 18
Volume: 2003-101
Publication status: Published - 2003

Publication series

Name: CentER Discussion Paper
Volume: 2003-101

Keywords

  • estimation
  • interpolation
  • polynomials
  • nonlinear programming
